patent_id | description | length
---|---|---|
11858089 | DESCRIPTION OF THE EMBODIMENTS FIG.2is a top view of a polishing layer of a polishing pad of an embodiment of the invention.FIG.3is a cross-sectional schematic diagram along line I-I′ inFIG.2. Specifically, line I-I′ inFIG.2is disposed along a radius direction, i.e.,FIG.3is a cross-sectional schematic diagram along a radius direction. Referring to bothFIG.2andFIG.3, in the present embodiment, a polishing layer100of a polishing pad has a surface pattern102, and the surface pattern102is obtained by the transfer of a pattern in a mold forming the polishing layer100, wherein the surface pattern102has a plurality of grooves104and a plurality of polishing portions106, and each of the grooves104is disposed between every two adjacent polishing portions106. In other words, in the cross section of line I-I′ along the radius direction, the grooves104and the polishing portions106are staggered with one another. Moreover, in the present embodiment, the surfaces of the top of the polishing portions106are coplanar and all surfaces of top of the polishing portion form a polishing surface PS. Specifically, when a polishing process is performed on an object using the polishing layer100, the object is in contact with the polishing surface PS. In the present embodiment, the polishing layer100includes a body layer110and a surface layer112disposed on the surface of the body layer110. Specifically, in the present embodiment, the body layer110and the surface layer112are formed by the same polymer material, such as polyester, polyether, polyurethane, polycarbonate, polyacrylate, polybutadiene, or other polymer materials synthesized by a suitable thermosetting resin or thermoplastic resin, but the invention is not limited thereto. More specifically, in the present embodiment, although some differences exist between the body layer110and the surface layer112, the body layer110and the surface layer112are formed by the same polymer material, and therefore the polishing process is not significantly affected. Moreover, a break-in process performed before the use of polishing pad can remove the surface layer112disposed on the top surface of polishing layer100, so that the contact surface between the polishing layer100of polishing pad and the object is more uniform during the polishing process. More specifically, the surface layer112is only remained at a side of the polishing portions106as the polishing layer100is worn in the polishing process. During the manufacturing process of the polishing layer100, the surface of the polishing layer100directly contacts with the mold or adjacent space, and therefore some differences exist between the portion of the adjacent surface of the polishing layer100and the other portions, wherein the portion of the adjacent surface forms the surface layer112, the other portions form the body layer110, and the method of manufacturing the polishing layer100is described in more detail later. For instance, in an embodiment, compared to the color of the body layer110, the color of the surface layer112is darker; in another embodiment, when the polishing layer100is a porous material, the pore count per volume is fewer in the surface layer112, and the pore count per volume is more in the body layer110. More specifically, in the present embodiment, eight flattened regions U located on the top of one of the polishing portions106of the polishing layer100expose the body layer110. 
Specifically, in the present embodiment, the flattened regions U exposing the body layer110are located on the top of one of the polishing portions106which is in the peripheral region of the polishing layer100. Moreover, in the present embodiment, referring toFIG.3, a width “d” of the flattened regions U is less than a width “D” of the top of the corresponding polishing portion106. The flattened regions U shown inFIG.2andFIG.3are all located in the central location of the top of the polishing portions106, but the invention is not limited thereto, and the flattened regions U can also be located in the edge location of the top of the polishing portions106. From another perspective, as described above, the surface layer112is disposed on the surface of the body layer110, and the flattened regions U on the top of the polishing portions106expose the body layer110, such that the polishing surface PS is substantially formed by the surface layer112and the body layer110. In other words, when a polishing process is performed on an object using the polishing layer100, the object is in contact with the surface layer112and also in contact with the body layer110. However, a break-in process performed before the use of polishing pad can remove the surface layer112disposed on the top surface of polishing layer100, so that the contact surface between the polishing layer100of polishing pad and the object is more uniform during the polishing process. In other words, the polishing surface PS shows that the surface layer112is only remained at a side of the polishing portions106as the polishing layer100is worn in the polishing process. Moreover, in the embodiments ofFIG.2andFIG.3, the top of the outermost polishing portion106has eight flattened regions U, but the invention does not limit the location and quantity of the flattened regions U. Based on actual process conditions, the polishing layer100only needs to have at least one flattened region U. In other embodiments, the flattened regions U can also be located on the top of all of the polishing portions106, as shown inFIG.8. Moreover, in the embodiments ofFIG.2andFIG.3, although the flattened regions U exposing the body layer110are all located in the peripheral region of the polishing layer100, the invention is not limited thereto. In an embodiment, the method of manufacturing the polishing layer100is, for instance, forming in a mold using compression molding, and at this point, the flattened regions U can be located in the peripheral region of the polishing layer100(as shown inFIG.2andFIG.3). In another embodiment, the method of manufacturing the polishing layer100is, for instance, injecting the polymer material forming the polishing layer100in a mold using a perfusion method, and at this point, the flattened regions U can also be located at the end E of the perfusion flow mark M, as shown inFIG.7. For instance, if the perfusion hole is in the center of the mold such that the polymer material forming the polishing layer100flows from the center of the mold to the periphery of the mold for perfusion, then the flattened regions U can be located in the peripheral region of the polishing layer100(as shown inFIG.2andFIG.3). 
As another example, as shown inFIG.7, if the perfusion hole H is at a periphery of the mold (i.e., perfusion periphery) such that the polymer material forming the polishing layer100flows from the perfusion periphery of the mold to the opposite end periphery for perfusion, then the flattened regions U can be located in the end peripheral region of the polishing layer100, and the distribution of the flattened regions U in the end peripheral region is, for instance, a fan-shaped distribution or a semicircular distribution, but the invention is not limited thereto. The flattened regions U shown inFIG.2andFIG.3are all located in the central location of the top of the polishing portions106, but the invention is not limited thereto, and the flattened regions U can also be, for instance, located in the edge location of the top of the polishing portions106, and can further be, for instance, located in the edge portion of the top of the polishing portions106which is located in the end peripheral region mentioned above. Moreover, the shape of all of the flattened regions U can be a dot, such as: a circular dot shown inFIG.2andFIG.3, or a triangle point, square dot, hexagonal dot, or other dot shapes, but the invention is not limited thereto. In other embodiments, the shape of the flattened regions U can also be a strip. In the case of the circular groove104ofFIG.2andFIG.3, the flattened regions U are, for instance, elongated arcs, and grooves of other shapes can have suitable strip shapes, but the invention is not limited thereto. Moreover, in the embodiments ofFIG.2andFIG.3, the distributions of all of the grooves104are concentric circles, and the cross section along the direction is along the radius direction, but the invention is not limited thereto. In other embodiments, the distributions of the grooves104can also be eccentric circles, ovals, polygonal rings, spiral rings, irregular rings, parallel lines, radiation, radiation arcs, spirals, dots, XY lattices, polygonal lattices, irregular shapes, or a combination thereof, but the invention is not limited thereto. The cross section along the direction can be, for instance, parallel to the X-axis direction, parallel to the Y-axis direction, a direction at an angle with the X-axis direction, a radius direction, a circumferential direction, or a combination thereof, but the invention is not limited thereto. In the following, to more clearly describe the polishing layer100and the function thereof, the manufacturing method of the polishing layer100is described with reference toFIG.4andFIG.5AtoFIG.5C.FIG.4is a flow chart of a manufacturing method of a polishing layer of an embodiment of the invention.FIG.5AtoFIG.5Care cross sections of the manufacturing process of the polishing layer ofFIG.2along line I-I′. Similarly, as described above,FIG.5AtoFIG.5Care all respectively cross sections along the radius direction. First, referring to bothFIG.4andFIG.5A, step S10is performed to provide a mold200, wherein the mold200includes an upper die202, a lower die204, and a mold cavity C defined between the upper die202and the lower die204. The mold cavity C has a contour pattern F (i.e., the pattern of the lower surface of the upper die202), the contour pattern F faces the mold cavity C, and the cross section of the contour pattern F along the radius direction includes a plurality of recessions212and at least one concavity portion214. 
In the present embodiment, the contour pattern F of the mold cavity C is transferred to obtain the surface pattern102of the polishing layer100, and therefore the contour pattern F of the mold cavity C corresponds to the surface pattern102of the polishing layer100. More specifically, the recessions212of the contour pattern F of the mold cavity C correspond to the polishing portions106of the surface pattern102of the polishing layer100, and the concavity portions214of the contour pattern F of the mold cavity C correspond to the flattened regions U of the surface pattern102of the polishing layer100. More specifically, in the present embodiment, the concavity portions214are disposed in the peripheral region of the contour pattern F, the shape of all of the concavity portions214is a hole, and a width “w” of the concavity portions214is less than a width “W” of the bottom of the corresponding recession212. Moreover, in the present embodiment, the concavity portions214are disposed on the bottom of one of the recessions212. From another perspective, in the present embodiment, the bottoms of the recessions212are coplanar with one another but are not coplanar with the bottoms of the concavity portions214. Next, referring toFIG.4andFIG.5A, step S12is performed to dispose a polymer material in the mold cavity C. Specifically, the polymer material is the main material forming the polishing layer100, and is, for instance, polyester, polyether, polyurethane, polycarbonate, polyacrylate, polybutadiene, or other polymer materials synthesized by a suitable thermosetting resin or thermoplastic resin, but the invention is not limited thereto. Moreover, in the present embodiment, the method of disposing the polymer material in the mold cavity C includes compression molding or a perfusion method. Specifically, in an embodiment, the method of compression molding which disposes the polymer material in the mold cavity C for forming the polishing layer100includes: directly placing the polymer material in the lower die204of the mold200and then using the upper die202to apply pressure in the X direction on the polymer material which is placed in the lower die204. At this point, the polymer material is driven by pressure to move toward the peripheral region of the mold cavity C. In another embodiment, the perfusion method injects the polymer material into the mold cavity C to form the polishing layer100, and the concavity portions214are disposed at the end of the flow field during the perfusion of the polymer material. For instance, if the perfusion hole is in the center of the mold such that the polymer material forming the polishing layer100flows from the center of the mold200to the periphery for perfusion, then the concavity portions214can be located in the peripheral region of the contour pattern F (as shown inFIG.5A). As another example, if the perfusion hole is at a periphery of the mold200(i.e., perfusion periphery) such that the polymer material forming the polishing layer100flows from the perfusion periphery of the mold200to the opposite end periphery for perfusion, then the concavity portions214can be located in the end peripheral region of the mold200, and the distribution of the concavity portions214in the end peripheral region is, for instance, a fan-shaped distribution or a semicircular distribution, but the invention is not limited thereto. 
More specifically, in the present embodiment, a polymer material is disposed in the mold cavity C having the contour pattern F, such that the polishing layer100does not have void defects caused by bubbles. The reasons are as follows: as described above, the concavity portions214disposed on the bottom of the recessions212are in the peripheral region of the contour pattern F of the mold cavity C or at the flow field end, such that regardless of whether the polymer material is disposed in the mold cavity C using a perfusion method or compression molding, due to the subjected pressure or inherent flow properties, the polymer material can be filled in the concavity portions214to push air or gas generated by the polymer material itself into the concavity portions214to prevent the issue of a void defect V caused by air or gas generated by the polymer material itself remaining between the mold10and the polymer material in the form of a bubble B in the prior art. Next, referring to all ofFIG.4,FIG.5A, andFIG.5B, step S14is performed to cure the polymer material to form a semifinished product S. Specifically, in step S14, after the polymer material is cured, a mold-release step can be further performed to obtain the semifinished product S as shown inFIG.5B. Moreover, in the present embodiment, the method of curing the polymer material includes, for instance, performing a heat treatment. More specifically, referring to bothFIG.5AandFIG.5B, in the present embodiment, the semifinished product S has a surface pattern220corresponding to the contour pattern F of the mold cavity C. Specifically, in the present embodiment, the surface pattern220includes a plurality of polishing portions106corresponding to the recessions212, grooves104disposed between every two adjacent polishing portions106, and protruding portions222corresponding to the concavity portions214, wherein the protruding portions222are the portions that the polymer material filled in the concavity portions214due to subjected pressure or inherent flow properties in step S14. More specifically, in the present embodiment, since the protruding portions222and the concavity portions214correspond to one another, and the polishing portions106and the recessions212correspond to one another, based on the above, the quantity of the protruding portions222is eight; the protruding portions222are located in the peripheral region of surface pattern220or on the top of one of the polishing portions106which is in the flow field end; the shape of the protruding portions222is a dot; and a width “z” of the protruding portions222is less than a width “D” of the top of the corresponding polishing portions106. From another perspective, in the present embodiment, the semifinished product S includes a body layer110and a surface layer112disposed on the surface of the body layer110. Specifically, in the present embodiment, the body layer110and the surface layer112are formed by the same polymer material, but some differences exist between the body layer110and the surface layer112. During the curing process of the polymer material, the surface of the semifinished product S directly contacts with the mold200or the space of the concavity portions214such that some differences exist between the portion of the adjacent surface of the semifinished product S and the other portions, wherein the portion of the adjacent surface forms the surface layer112, and the other portions form the body layer110. 
For instance, in an embodiment, compared to the color of the body layer110, the color of the surface layer112is darker; in another embodiment, when the polishing layer100is a porous material, the pore count per volume is fewer in the surface layer112, and the pore count per volume is more in the body layer110. It should be mentioned that, based on the above, those having ordinary skill in the art should understand that, in the manufacturing method of the polishing layer100of the invention, the concavity portions214can be used to accommodate air or gas generated by the polymer material itself, and whether air or gas generated by the polymer material itself can be successfully pushed into the concavity portions214is one of the key factors to prevent void defects caused by the bubbles in the polishing layer, and therefore based on actual manufacturing conditions, to prevent void defects caused by the bubbles of the polishing layer, the design of the concavity portions214of the contour pattern F can be adjusted. Accordingly, as described above, in the present embodiment, the contour pattern F of the mold cavity C has eight concavity portions214, but the invention does not limit the quantity of the concavity portions214, and based on actual manufacturing conditions, the contour pattern F has at least one concavity portion214; the width “w” of the concavity portions214is less than the width “W” of the bottom of the corresponding recessions212, but the invention does not limit the width w of the concavity portions214. In other embodiments, the width “w” of the concavity portions214can also be equal to the width “W” of the top of the corresponding recessions212; and the shape of all of the concavity portions214is a hole, but the invention does not limit the shape of the concavity portions214, and in other embodiments, the shape of the concavity portions214can also be a long groove. More specifically, as described above, since the protruding portions222and the concavity portions214correspond to one another, similarly, the invention does not limit the shape of the protruding portions222, and in other embodiments, the shape of the protruding portions222can also be a strip; the invention does not limit the quantity of the protruding portions222, and in other embodiments, based on actual manufacturing conditions, the surface pattern220only needs to have at least one protruding portion222; the invention does not limit the width “z” of the protruding portions222, and in other embodiments, the width “z” of the protruding portions222can also be equal to the width “D” of the top of the corresponding polishing portions106. Next, referring to bothFIG.4andFIG.5C, step S16is performed to perform a flattening process on the semifinished product S to remove the protruding portions222and complete the manufacture of the polishing layer100. Specifically, in step S16, after the protruding portions222are removed, flattened regions U exposing the body layer110are formed on the top of the polishing portions106corresponding to the protruding portions222such that the flattened regions U are coplanar with the polishing surface PS, and the polishing layer100has a flat polishing surface PS. It should be mentioned that,FIG.5CisFIG.3. The structure, functions and so on of the polishing layer100are described in detail with reference toFIG.2andFIG.3mentioned above and are therefore not repeated herein. 
Moreover, in the present embodiment, the flattening process includes, for instance, mechanical cutting, chemical etching, laser processing, or abrasion, but the invention is not limited thereto. It should be mentioned that, in the present embodiment, by manufacturing the polishing layer100using the mold200having the contour pattern F, the polishing surface PS of the polishing layer100can prevent void defects caused by bubbles. Specifically, as described above, via the concavity portions214disposed on the bottom of the recessions212in the peripheral region of the contour pattern F or the flow field end, air or gas generated by the polymer material itself present in the manufacturing process can be pushed into the concavity portions214by the polymer material subsequently forming the protruding portions222, such that the possibility of forming void defects is excluded. More specifically, the flattening process is used to remove protruding portions222, so that the polishing layer100can have a flat polishing surface PS. It should be mentioned that, in the present embodiment, inFIG.5A, the upper die202having the contour pattern F has a die body202aand a molding die202bdisposed below the die body202a. In other words, in the present embodiment, the mold cavity C is defined by the molding die202band the lower die204. Since the upper die202has the die body202aand the molding die202b, molding dies having different contour patterns can be used in correspondence to different forms of the polishing layer as desired. However, the invention is not limited thereto. In other embodiments, the contour pattern F in the mold cavity C can also be located at the lower die204instead of being located at the upper die202, and the lower die204can also have a die body and a molding die, but the invention is not limited thereto. Alternatively, the mold200can omit the molding die202b, with the contour pattern F instead formed integrally in the upper die202or the lower die204. In other embodiments, the mold200can include a patternless upper die202and lower die204, and a patterned material layer (formed by a polymer material for instance) disposed on one of the upper die202and the lower die204, wherein a surface of the patterned material layer has a contour pattern F facing the mold cavity C, and the patterned material layer and an injection material layer molded in the mold cavity C are combined into a semifinished product of polishing layer. However, the semifinished product of polishing layer does not have a groove, and a groove needs to be formed in the semifinished product of polishing layer in a subsequent process. At least one difference exists between the patterned material layer and the injection material layer, so that the semifinished product of polishing layer has composite material properties such as water permeability, porosity, pore size, pore density, hydrophobicity, hardness, density, compression ratio, modulus, ductility, consumption rate, or roughness, but the invention is not limited thereto. Subsequent processing can include performing a flattening process on the other surface of the patterned material layer without the contour pattern F to remove a partial thickness of the patterned material layer and to expose the injection material layer. The patterned material layer becomes a separate damascene material layer embedded in the injection material layer, and the flattening process can remove the protruding portion formed in the injection material layer at the same time, and then grooves are formed. 
In the present embodiment, the forming method of the patterned material layer includes mechanical method, chemical method, laser processing method, imprinting method, stamping method, or a combination thereof, and the methods mentioned above are used to pattern the patterned material layer disposed on the upper die202or the lower die204. The flattening process includes, for instance, mechanical cutting, chemical etching, laser processing, or abrasion, but the invention is not limited thereto. It should be mentioned that, if a bigger void defect is generated on the polishing layer100in the manufacturing process, then the slurry or water used in the polishing process may penetrate beneath the polishing layer100, such that the polishing layer100and the adhesive layer or base layer disposed below the polishing layer100are delaminated, and the life of the polishing pad is significantly affected as a result. In an embodiment of the invention, a waterproof layer can be further added at any interface between the polishing layer100, the adhesive layer, and the base layer to prevent or reduce the slurry or water used during the polishing process from penetrating beneath the polishing layer100and affecting the life of the polishing pad. The material of the waterproof layer can be, for instance, acrylic, epoxy resin, rubber, or polyurethane, and can use a method such as blade coating, press coating, spray coating, or spin coating to combine waterproof layer with the adhesive layer and base layer below the polishing layer100. Moreover, a hot melt adhesive film, fiber layer (such as woven fabric or nonwoven fabric), polymer film inner folder fiber layer, metal-containing film, or a combination thereof can also be bonded below the polishing layer100for the waterproof layer, and the bonding method can be a fusion method or the polishing layer100material can be directly cured and adhered on the waterproof layer, but the invention is not limited thereto. The generation of void defects can be prevented in the manufacturing process of the polishing layer100, and the polishing pad of the invention can further have a better polishing pad life with the waterproof layer. FIG.6is a flow chart of a polishing method of an embodiment of the invention. The polishing method is suitable for polishing an object. Specifically, the polishing method can be applied to a polishing process for manufacturing an industrial device, such as an application in a device in the electronics industry such as a semiconductor, integrated circuit, microelectromechanics, energy conversion, communication, optic, storage disk, and display. An object used for manufacturing the devices can include, for instance, a semiconductor wafer, Group III-V wafer, storage device carrier, ceramic substrate, polymer substrate, and glass substrate, but the invention is not limited thereto. Referring toFIG.6, first, step S20is performed to provide a polishing pad. Specifically, in the present embodiment, the polishing pad includes the polishing layer100in any embodiment mentioned above. Relevant descriptions of the polishing layer100are as provided in detail in the above embodiments, and are therefore not repeated herein. It should be mentioned that, in the present embodiment, a base layer, a waterproof layer, an adhesive layer, or a combination thereof can be disposed below the polishing layer100in the polishing pad. 
Next, step S22is performed to apply a pressure to an object such that the object is pressed on the polishing pad and in contact with the polishing pad. Specifically, as described above, the object is in contact with the polishing surface PS in the polishing layer100. Moreover, the method of applying pressure to the object is to use a carrier which can hold the object. Next, step S24is performed to provide relative motion to the object and the polishing pad to perform a polishing process on the object using the polishing pad to achieve the goal of planarization. Specifically, the method of providing relative motion to the object and the polishing pad includes, for instance, rotating the polishing pad fixed on the platen via the rotation of the platen. It should be mentioned that, the polishing layer100is manufactured by the mold200having the contour pattern F of the mold cavity C and the at least one concavity portion214is disposed on the bottom of at least one of the recessions212. The polishing layer100in any embodiment above does not have void defects caused by bubbles. Besides, the polishing layer100has a flat polishing surface PS and a novel structure, such that the resulting polishing pad can have a better polishing quality during the polishing process. Although the invention has been described with reference to the above embodiments, it will be apparent to one of ordinary skill in the art that modifications to the described embodiments may be made without departing from the spirit of the invention. Accordingly, the scope of the invention is defined by the attached claims not by the above detailed descriptions. | 28,957 |
11858090 | DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT A preferred embodiment of the present invention will be described in detail below with reference to the accompanying drawings.FIG.1schematically illustrates in perspective by way of example a grinding apparatus according to the embodiment of the present invention. InFIG.1, X-axis directions (forward and rearward directions) and Y-axis directions (leftward and rightward directions) represent respective directions that extend perpendicularly to each other in a horizontal plane, and Z-axis directions (upward and downward directions) represent directions perpendicular to the X-axis directions and the Y-axis directions, i.e., vertical directions. The grinding apparatus, denoted by2inFIG.1, includes a base4supporting thereon various components thereof. The base4has an opening4adefined in a front end of an upper surface thereof and housing a delivery mechanism6therein. The delivery mechanism6is, for example, a robot arm having a plurality of joints.FIG.2schematically illustrates in perspective by way of example a workpiece unit that is delivered by the delivery mechanism6. The workpiece unit11illustrated inFIG.2has a disk-shaped workpiece13. The workpiece13is, for example, a wafer made of a semiconductor material such as silicon (Si). The workpiece13has a side of a face surface13aincluding a plurality of areas demarcated by a plurality of intersecting projecting dicing lines15, with devices17such as ISs or LSI circuits formed in the respective areas. The workpiece13is not limited to any particular materials, shapes, structures, sizes, etc. The workpiece13may be made of materials including other semiconductor materials, ceramic, resin, metal, etc. Similarly, the devices17are not limited to any particular kinds, quantities, shapes, structures, sizes, layouts, etc. The workpiece unit11also has a film-like tape19that is affixed to the face surface13aof the workpiece13and generally equal in diameter to the workpiece13. The tape19is made of resin, for example, and protects the devices17by softening shocks that are applied to the side of the face surface13awhen a side of a reverse surface13bof the workpiece13is ground. According to the present embodiment, while a side of one surface11aof the workpiece unit11, i.e., a surface19aof the tape19that is not affixed to the workpiece13, is being held in position, a side of another surface11bof the workpiece unit11, i.e., the reverse surface13bof the workpiece13, is ground. As illustrated inFIG.1, two cassette rest bases10aand10bfor placing respective cassettes8aand8bthat house workpiece units11therein are mounted on the front end of the base4forwardly of the opening4a. The delivery mechanism6is able to not only hold and deliver the workpiece unit11but also turn the workpiece unit11upside down. A measuring unit, i.e., a first measuring unit,12that is used to calculate a value of the thickness of the workpiece unit11held by the delivery mechanism6is mounted on the base4behind the opening4a.FIG.3schematically illustrates the measuring unit12in side elevation. As illustrated inFIG.3, the measuring unit12includes a support14shaped as a quadrangular prism extending along the Z-axis directions and an upper surface measuring device16and a lower surface measuring device18that are fixed to a front surface of the support14that faces the delivery mechanism6and each shaped as a quadrangular prism extending along the X-axis directions. 
The upper surface measuring device16and the lower surface measuring device18are spaced from each other along the Z-axis directions. Each of the upper surface measuring device16and the lower surface measuring device18is a non-contact-type distance measuring device for measuring the distance up to a measurand, i.e., an object to be measured, using a laser beam. Specifically, the upper surface measuring device16includes a light emitter16afor emitting a laser beam downwardly toward the upper surface of a measurand, e.g., the other surface11bof the workpiece unit11, and a light detector16bfor detecting a laser beam reflected from the upper surface of the measurand, e.g., the other surface11bof the workpiece unit11. The upper surface measuring device16measures the distance up to the upper surface of the measurand on the basis of a phase difference or the like between the emitted laser beam and the reflected laser beam. Similarly, the lower surface measuring device18includes a light emitter18afor emitting a laser beam upwardly toward the lower surface of a measurand, e.g., the one surface11aof the workpiece unit11, and a light detector18bfor detecting a laser beam reflected from the upper surface of the measurand, e.g., the one surface11aof the workpiece unit11. The lower surface measuring device18measures the distance up to the lower surface of the measurand on the basis of a phase difference or the like between the emitted laser beam and the reflected laser beam. The light emitter16aincludes, for example, a light source for emitting light having a wavelength reflected by the upper surface of the measurand, e.g., the reverse surface13bof the workpiece13and a lens and/or a mirror for guiding the emitted light from the light source to the measurand. Similarly, the light emitter18aincludes, for example, a light source for emitting light having a wavelength reflected by the lower surface of the measurand, e.g., the surface19aof the tape19and a lens and/or a mirror for guiding the emitted light from the light source to the measurand. The wavelength of the light emitted from the light emitter16aand the wavelength of the light emitted from the light emitter18amay be different from each other. Each of the light detectors16band18bincludes, for example, a lens and/or a mirror for guiding the light reflected from the measurand and a light detecting element such as a complementary metal oxide semiconductor (CMOS) image sensor for detecting the reflected light guided by the lens and/or the mirror. As illustrated inFIG.1, the grinding apparatus2also includes a position adjusting mechanism20mounted on the base4obliquely behind the opening4a, i.e., sideways of the measuring unit12, for adjusting the position of the workpiece unit11delivered by the delivery mechanism6. The position adjusting mechanism20includes, for example, a disk-shaped table and a plurality of pins radially movably disposed around the table. The position adjusting mechanism20operates in the following manner. When the workpiece unit11is delivered from the cassette8aand placed on the table of the position adjusting mechanism20by the delivery mechanism6, the pins are moved radially inwardly into contact with an outer circumferential edge of the workpiece unit11, aligning the center of the workpiece unit11with a predetermined position in the X-axis directions and the Y-axis directions. 
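For reference on the phase-difference distance measurement described above for the upper surface measuring device16and the lower surface measuring device18: the embodiment only states that the distance is obtained from a phase difference or the like, so the expression below is an illustrative phase-shift (amplitude-modulated continuous-wave) ranging relation rather than the embodiment's own formula, and the modulation frequency is an assumed parameter.

```latex
% Illustrative phase-shift ranging relation (not stated in the embodiment):
%   d            : distance to the measurand
%   c            : speed of light
%   f_{mod}      : modulation frequency of the emitted beam (assumed parameter)
%   \Delta\varphi: measured phase difference between emitted and reflected beams
d = \frac{c}{4\pi f_{\mathrm{mod}}}\,\Delta\varphi
% The result is unambiguous only within one half modulation wavelength, c/(2 f_{mod}).
```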
According to the present embodiment, the workpiece unit11is placed on the table of the position adjusting mechanism20such that the reverse surface11bfaces upwardly. According to the present embodiment, furthermore, the workpiece unit11that is delivered from the cassette8aby the delivery mechanism6is introduced into the position adjusting mechanism20after an outer circumferential portion of the workpiece unit11is positioned between the upper surface measuring device16and the lower surface measuring device18of the measuring unit12and a value of the thickness of the workpiece unit11is calculated. A delivery mechanism22for holding the workpiece unit11picked up from the position adjusting mechanism20and delivering the workpiece unit11rearwardly is mounted on the base4behind the measuring unit12. The delivery mechanism22includes a holding pad for holding the side of the upper surface of the workpiece unit11, i.e., the reverse surface11bthereof according to the present embodiment, under suction, and an arm connected to the holding pad. The delivery mechanism22delivers the workpiece unit11that has been adjusted in position by the position adjusting mechanism20backwardly by turning the holding pad with the arm. A disk-shaped turntable24is mounted on the base4behind the delivery mechanism22. The turntable24is connected to a rotary actuator, not illustrated, such as an electric motor and is rotatable thereby about a rotational axis extending generally parallel to the Z-axis directions. The turntable24supports on its upper surface three chuck tables26for holding respective workpiece units11thereon. The chuck tables26are spaced apart by generally equal angular intervals along circumferential directions of the turntable24. Though the turntable24supports the three chuck tables26thereon according to the present embodiment, the number, etc. of chuck tables26supported on the turntable24is not limited according to the present invention. FIG.4schematically illustrates in plan the turntable24and peripheral structures of the turntable24by way of example. InFIG.4, some components are indicated by broken lines for illustrative purposes. The delivery mechanism22turns the arm to deliver the workpiece unit11held by the holding pad to one of the chuck tables26that is located in a loading/unloading area A (seeFIG.4) adjacent to the delivery mechanism22. The turntable24is rotated in a direction indicated by an arrow inFIGS.1and4to move each of the chuck tables26successively to the loading/unloading area A, a rough grinding area B, and a finish grinding area C. Each of the chuck tables26is connected to a rotary actuator, not illustrated, such as an electric motor and is rotatable thereby about a rotational axis extending generally parallel to the Z-axis directions. Each of the chuck tables26has a disk-shaped frame body made of a metal material such as stainless steel. The frame body has a recess defined in an upper surface thereof and having a circular opening in its upper end. A disk-shaped porous plate made of ceramic or the like is fixedly mounted in the recess. Each of the chuck tables26has an upper surface provided by the porous plate and shaped as a conical surface having its center protruding slightly upwardly beyond its outer edge. The conical upper surface functions as a holding surface26afor holding thereon the side of the lower surface of the workpiece unit11, i.e., the face surface11athereof according to the present embodiment. 
In other words, each of the chuck tables26has in its upper portion the holding surface26afor holding the workpiece unit11thereon. The holding surface26ais connected to a suction source, not illustrated, such as a vacuum pump through a suction channel, not illustrated, defined in the chuck table26. The workpiece unit11that has been placed on the chuck table26has its lower surface attracted to the holding surface26aby a negative pressure generated and applied to the holding surface26aby the vacuum source. As illustrated inFIG.1, two columnar support structures28are mounted on the base4behind the rough grinding area B and the finish grinding area C, respectively, i.e., behind the turntable24. A Z-axis moving mechanism30is mounted on a front surface of each of the columnar support structures28. The Z-axis moving mechanism30includes a pair of guide rails32lying generally parallel to each other and extending along the Z-axis directions, and a movable plate34slidably mounted on the guide rails32. A nut, not illustrated, of a ball screw is fixed to a rear surface, i.e., a reverse surface, of the movable plate34, and the screw shaft36that is extending generally parallel to the guide rails32is rotatably threaded through the nut. The screw shaft36has an end coupled to an electric motor38. When the electric motor38is energized, it rotates the screw shaft36, causing the nut to move the movable plate34in the Z-axis directions along the guide rails32. A spindle housing mount40is disposed on a front surface, i.e., a face surface, of each of the movable plates34. The spindle housing mount40supports thereon a grinding unit42for grinding the workpiece unit11. The grinding unit42includes a spindle housing44fixed to the spindle housing mount40. The spindle housing44houses a spindle46rotatably disposed therein that is rotatable about a rotational axis extending generally parallel to the Z-axis directions. The spindle46has a lower end portion exposed downwardly from a lower end face of the spindle housing44. A disk-shaped grinding wheel mount48is fixed to the exposed lower end portion of the spindle46. The grinding unit42on the support structure28disposed behind the rough grinding area B includes a first grinding wheel50afor rough grinding that is mounted on a lower surface of the grinding wheel mount48. The first grinding wheel50afor rough grinding includes a first wheel base made of metal such as stainless steel or aluminum and having generally the same diameter as the grinding wheel mount48. The first wheel base has a lower surface on which there are disposed a plurality of first grindstones arranged in an annular array that are made of abrasive grains of diamond or the like suitable for rough grinding that are bound together by a vitrified or resinoid bond. The spindle housing44of the grinding unit42behind the rough grinding area B houses therein a first rotary actuator, not illustrated, such as an electric motor that is connected to an upper end of the spindle46. When the first rotary actuator is energized, it rotates the spindle46and hence the first grinding wheel50aabout their rotational axes that are aligned with each other along the Z-axis directions. Near the first grinding wheel50a, there is positioned a grinding liquid supply nozzle, not illustrated, for supplying liquid, i.e., grinding liquid, such as pure water to a region, i.e., a processing point, where the workpiece unit11to be ground and the first grindstones are held in contact with each other. 
The grinding liquid supply nozzle may be replaced or combined with grinding liquid supply ports defined in the first grinding wheel50afor supplying liquid. Similarly, the grinding unit42on the support structure28disposed behind the finish grinding area C includes a second grinding wheel50bfor finish grinding that is mounted on a lower surface of the grinding wheel mount48. The second grinding wheel50bfor finish grinding includes a second wheel base made of metal such as stainless steel or aluminum and having generally the same diameter as the grinding wheel mount48. The second wheel base has a lower surface on which there are disposed a plurality of second grindstones arranged in an annular array that are made of abrasive grains of diamond or the like suitable for finish grinding that are bound together by a vitrified or resinoid bond. The spindle housing44of the grinding unit42behind the finish grinding area C houses therein a second rotary actuator, not illustrated, such as an electric motor that is connected to an upper end of the spindle46. When the second rotary actuator is energized, it rotates the spindle46and hence the second grinding wheel50babout their rotational axes that are aligned with each other along the Z-axis directions. Near the second grinding wheel50b, there is positioned a grinding liquid supply nozzle, not illustrated, for supplying liquid, i.e., grinding liquid, such as pure water to a region, i.e., a processing point, where the workpiece unit11to be ground and the second grindstones are held in contact with each other. The grinding liquid supply nozzle may be replaced or combined with grinding liquid supply ports defined in the second grinding wheel50bfor supplying liquid. Workpiece units11that are held on the respective chuck tables26are successively ground by the two grinding units42described above. Specifically, the workpiece unit11held on the chuck table26that has moved to the rough grinding area B is ground by the grinding unit42positioned near the rough grinding area B, whereas the workpiece unit11held on the chuck table26that has moved to the finish grinding area C is ground by the grinding unit42positioned near the finish grinding area C. As illustrated inFIG.1, a measuring unit, i.e., a second measuring unit,52that is used to calculate a value of the thickness of the workpiece unit11that has been finish-ground by the grinding unit42is disposed forwardly of the grinding unit42that is positioned near the finish grinding area C. The measuring unit52has a first measuring section whose tip end is capable of contacting an upper surface of a measurand, i.e., the reverse surface11bof the workpiece unit11according to the present embodiment, and a second measuring section whose tip end is capable of contacting the holding surface26aof the chuck table26. The first measuring section and the second measuring section each include a contact-type position (height) measuring device for measuring the position or height of the tip end thereof in vertical directions. Stated otherwise, the second measuring unit52includes two contact-type position (height) measuring devices. A delivery mechanism54for holding the workpiece unit11that has been ground and delivering the workpiece unit11forwardly is mounted on the base4forwardly of the loading/unloading area A and sideways of the delivery mechanism22. 
The delivery mechanism54includes a holding pad for holding the side of the upper surface of the workpiece unit11, i.e., the reverse surface11bthereof according to the present embodiment, under suction, and an arm connected to the holding pad. The delivery mechanism54delivers the workpiece unit11that has been ground forwardly from the chuck table26by turning the holding pad with the arm. A cleaning unit56for cleaning the workpiece unit11delivered by the delivery mechanism54is disposed in front of the delivery mechanism54. The cleaning unit56includes, for example, a spinner table rotatable while holding thereon the side of the lower surface of the workpiece unit11, i.e., the face surface11athereof according to the present embodiment, and a nozzle for ejecting a cleaning fluid to the side of the upper surface of the workpiece unit11, i.e., the reverse surface11bthereof according to the present embodiment, held on the spinner table. The workpiece unit11that has been cleaned by the cleaning unit56is delivered by the delivery mechanism6. For example, the workpiece unit11is delivered from the cleaning unit56into the cassette8bby the delivery mechanism6after an outer circumferential portion of the workpiece unit11has been positioned between the upper surface measuring device16and the lower surface measuring device18of the measuring unit12and a value of the thickness of the workpiece unit11has been calculated. Alternatively, the workpiece unit11may be delivered from the cleaning unit56directly into the cassette8bby the delivery mechanism6. In addition, the grinding apparatus2may include other components than those described above. For example, the grinding apparatus2may include a touch panel having a touch sensor for entering instructions from an operator into the grinding apparatus2and a display for outputting various pieces of information for the operator to see. Operation of the components of the grinding apparatus2is controlled by a control unit included in the grinding apparatus2.FIG.5schematically illustrates in block form the control unit, denoted by58, included in the grinding apparatus2by way of example. As illustrated inFIG.5, the control unit58includes, for example, a processing section60for generating signals for controlling the components of the grinding apparatus2and a storage section62for storing various pieces of information, such as data and programs, for use in the processing section60. For instance, the storage section62stores, in advance, values of the thicknesses of workpieces13and tapes19of workpiece units11as targets to be ground by the grinding apparatus2and a value of an interval between the upper surface measuring device16and the lower surface measuring device18of the measuring unit12. The processing section60has functions performed by a central processing unit (CPU) or the like that reads and executes programs stored in the storage section62. The storage section62has functions performed by at least one of semiconductor memories such as a dynamic random access memory (DRAM), a static random access memory (SRAM), and a NAND-type flash memory and a magnetic storage device such as a hard disk drive (HDD). The processing section60includes a delivering section64, a measuring section66, a grinding section68, and a deciding section70as functional sections. These functional sections perform independent processing sequences at different times or simultaneously. 
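As an illustrative sketch only (the embodiment describes the control unit 58 functionally and does not disclose an implementation; all class and field names below are hypothetical), the values stored in advance in the storage section 62 and the functional sections of the processing section 60 could be organized as follows:

```python
from dataclasses import dataclass


@dataclass
class StorageSection:
    """Hypothetical layout of values stored in advance in the storage section 62."""
    target_workpiece_thickness_um: float  # expected thickness of the workpiece 13
    target_tape_thickness_um: float       # expected thickness of the tape 19
    sensor_interval_um: float             # interval between the upper and lower measuring devices of unit 12


class ProcessingSection:
    """Hypothetical grouping of the functional sections of the processing section 60."""

    def __init__(self, storage: StorageSection):
        self.storage = storage

    def deliver(self, unit):  # delivering section 64: drives delivery mechanisms 6, 22, and 54
        ...

    def measure(self, unit):  # measuring section 66: drives measuring units 12 and 52
        ...

    def grind(self, unit):    # grinding section 68: drives turntable, chuck tables, grinding units
        ...

    def decide(self, unit):   # deciding section 70: thickness calculation and target decision
        ...
```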
The processing section60may have other functional sections than the delivering section64, the measuring section66, the grinding section68, and the deciding section70. For example, the processing section60may have a displaying section for controlling displaying operation of the display that is a component of the touch panel. The delivering section64controls operation of the delivery mechanism6, the delivery mechanism22, and the delivery mechanism54. For example, the delivering section64controls operation of the delivery mechanism6in order to position an outer circumferential portion of the workpiece unit11in a position, i.e., a measuring position, between the upper surface measuring device16and the lower surface measuring device18of the measuring unit12. The measuring section66controls operation of the measuring unit12and the measuring unit52. For example, the measuring section66controls operation of the measuring unit12in order to measure a distance, i.e., a first distance, between the upper surface of the workpiece unit11whose outer circumferential portion has been positioned in the measuring position and the upper surface measuring device16and also to measure a distance, i.e., a second distance, between the lower surface of this workpiece unit11and the lower surface measuring device18. The grinding section68controls operation of the turntable24, the chuck tables26, the grinding units42, and components related to them. For example, the grinding section68controls operation of the turntable24, the chuck tables26, the grinding units42, and the related components in order to grind the workpiece units11held on the respective chuck tables26. The deciding section70calculates a value of the thickness of the workpiece unit11on the basis of results of the measurement performed by the measuring unit12. For example, the deciding section70calculates a value of the thickness of the workpiece unit11by subtracting the first distance and the second distance from the value of the interval, stored in the storage section62, between the upper surface measuring device16and the lower surface measuring device18of the measuring unit12. Moreover, the deciding section70decides whether the workpiece unit11is a target to be ground or not on the basis of the value of the thickness of the workpiece unit11that is obtained from the results of the measurement performed by the measuring unit12. For example, the deciding section70decides whether a desired tape19has been affixed to the workpiece13or not by comparing the sum of the values of the thicknesses, stored in the storage section62, of the workpiece13and the tape19and the calculated value of the thickness of the workpiece unit11with each other. If the deciding section70decides that a desired tape19has been affixed to the workpiece13, then the deciding section70determines that the workpiece unit11is a target to be ground. If the deciding section70decides that a desired tape19has not been affixed to the workpiece13, then the deciding section70determines that the workpiece unit11is not a target to be ground. FIG.6is a flowchart schematically illustrating an example of a processing sequence of a method of driving the grinding apparatus2. According to the method of driving the grinding apparatus2as illustrated inFIG.6, first, the delivery mechanism6delivers the workpiece unit11out of the cassette8aplaced on the cassette rest base10a(unloading step: S1). Then, the delivery mechanism6positions the workpiece unit11in the measuring position referred to above. 
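As described above, the deciding section 70 obtains the thickness of the workpiece unit 11 by subtracting the first and second measured distances from the stored interval between the two measuring devices, and it judges the unit a target to be ground when that thickness matches the sum of the stored workpiece and tape thicknesses. A minimal sketch of this arithmetic, assuming micrometer units and a hypothetical tolerance (the embodiment does not specify how the comparison is performed), is given below before the measuring step continues:

```python
def workpiece_unit_thickness(sensor_interval_um: float,
                             first_distance_um: float,
                             second_distance_um: float) -> float:
    """Thickness of the workpiece unit 11: stored interval between the two
    measuring devices minus the first and second measured distances."""
    return sensor_interval_um - first_distance_um - second_distance_um


def is_target_to_be_ground(measured_thickness_um: float,
                           workpiece_thickness_um: float,
                           tape_thickness_um: float,
                           tolerance_um: float = 5.0) -> bool:
    """A unit is a target only if its thickness matches the workpiece plus the
    desired tape thickness; the tolerance is a hypothetical illustrative value."""
    expected_um = workpiece_thickness_um + tape_thickness_um
    return abs(measured_thickness_um - expected_um) <= tolerance_um


# Example: a 2000 um sensor interval with measured gaps of 600 um (upper) and
# 630 um (lower) gives a 770 um unit, matching a 680 um wafer with a 90 um tape.
t = workpiece_unit_thickness(2000.0, 600.0, 630.0)   # -> 770.0
print(is_target_to_be_ground(t, 680.0, 90.0))        # -> True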
Then, the measuring unit12measures values to be used to calculate a value of the thickness of the workpiece unit11, i.e., the first distance and the second distance referred to above (measuring step: S2). Thereafter, the deciding section70of the control unit58calculates a value of the thickness of the workpiece unit11, as described above. Then, the deciding section70decides whether the workpiece unit11is a target to be ground or not on the basis of the value of the thickness of the workpiece unit11(deciding step: S3). Specifically, the deciding section70decides whether the workpiece unit11is a target to be ground or not on the basis of whether a desired tape19has been affixed to the workpiece13or not, as described above. If the deciding section70determines that the workpiece unit11is a target to be ground (S3: YES), then the workpiece unit11is delivered to the chuck table26(loading step: S4). Specifically, the delivery mechanism6delivers the workpiece unit11to the position adjusting mechanism20, and the delivery mechanism22delivers the workpiece unit11that has been adjusted in position by the position adjusting mechanism20from the position adjusting mechanism20, and then delivers the workpiece unit11to the chuck table26. When the workpiece unit11is delivered to and placed on the chuck table26, the workpiece unit11is ground (grinding step: S5). If the deciding section70determines that the workpiece unit11is not a target to be ground (S3: NO), then the workpiece unit11is delivered back into the cassette8a(loading step: S6). According to the grinding apparatus2and the method of driving the grinding apparatus2, as described above, a value of the thickness of the workpiece unit11can be recognized after the workpiece unit11has been delivered out of the cassette8aand before the workpiece unit11is delivered to the chuck table26, on the basis of the results of the measurement performed by the measuring unit12. The value of the thickness of the workpiece unit11represents the sum of the values of the thicknesses of the workpiece13and the tape19affixed thereto. Consequently, it is possible to decide whether the tape19affixed to the workpiece13has a desired thickness or not, i.e., is of a desired type or not, on the basis of the results of the measurement. Stated otherwise, it is possible to decide whether the workpiece unit11is a target to be ground or not before the workpiece unit11is delivered to the chuck table26. As a result, a workpiece13with an inadequate tape19affixed thereto is prevented from being ground. Furthermore, in the grinding apparatus2, it is possible to confirm whether the measuring unit, i.e., the second measuring unit,52is operating normally or not.FIG.7is a flowchart of an example of the processing sequence of the method of driving the grinding apparatus2for confirming whether the second measuring unit52is operating normally or not. According to the method of driving the grinding apparatus2as illustrated inFIG.7, first, the workpiece unit11held on the chuck table26is ground (grinding step: S11). Then, the measuring unit52measures values to be used to calculate a value of the thickness of the workpiece unit11held on the chuck table26, i.e., the vertical position, i.e., the height, of the upper surface of the workpiece unit11and the vertical position, i.e., the height, of the holding surface26aof the chuck table26(measuring step: S12). 
Then, the deciding section70calculates a value of the thickness of the workpiece unit11on the basis of the results of the measurement performed by the measuring unit52. Specifically, the deciding section70calculates the difference between the vertical position, i.e., the height, of the upper surface of the workpiece unit11and the vertical position, i.e., the height, of the holding surface26aof the chuck table26, as a value of the thickness of the workpiece unit11. The measuring unit52may measure the values continuously from a time before the workpiece unit11is ground to a time after the workpiece unit11has been ground. In such a case, the display as a component of the touch panel may be controlled to indicate the thickness of the workpiece unit11being ground to the operator at any time. Then, the delivery mechanism54delivers the workpiece unit11from the chuck table26(unloading step: S13). Specifically, the delivery mechanism54delivers the workpiece unit11from the chuck table26to the cleaning unit56, and the delivery mechanism6delivers the workpiece unit11cleaned by the cleaning unit56from the cleaning unit56into the measuring position referred to above. Then, the measuring unit, i.e., the first measuring unit,12measures values to be used to calculate a value of the thickness of the workpiece unit11, i.e., the first distance and the second distance referred to above (measuring step: S14). Thereafter, the deciding section70of the control unit58calculates a value of the thickness of the workpiece unit11, as described above. Then, the deciding section70decides whether the measuring unit52is operating normally or not by comparing the value of the thickness of the workpiece unit11obtained on the basis of the values of the measurement performed by the measuring unit52and the value of the thickness of the workpiece unit11obtained on the basis of the values of the measurement performed by the measuring unit12(deciding step: S15). For example, if the compared values are equal to each other, then the deciding section70determines that the measuring unit52is operating normally, and if the compared values are different from each other, then the deciding section70determines that the measuring unit52is not operating normally. Alternatively, if the difference between the compared values is equal to or smaller than a predetermined threshold value, then the deciding section70may determine that the measuring unit52is operating normally, and if the difference between the compared values exceeds the predetermined threshold value, then the deciding section70may determine that the measuring unit52is not operating normally. The storage section62may store the threshold value in advance. If the deciding section70determines that the measuring unit52is not operating normally (S15: NO), then that effect is indicated to the operator (indicating step: S16). For example, an error message, i.e., a message representing that the measuring unit52is not operating normally, is displayed on the display that is the component of the touch panel. Thereafter, the delivery mechanism6delivers the workpiece unit11into the cassette8bplaced on the cassette rest base10b(loading step: S17). On the other hand, if the deciding section70determines that the measuring unit52is operating normally (S15: YES), then no special processing is carried out, and the delivery mechanism6delivers the workpiece unit11into the cassette8bplaced on the cassette rest base10b(loading step: S17). 
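The normality check of deciding step S15 described above reduces to comparing the thickness obtained from the contact-type measuring unit 52 with the thickness obtained from the non-contact measuring unit 12. A minimal sketch follows; the threshold value and function names are placeholders, since the embodiment allows either an exact-equality comparison or a predetermined threshold stored in the storage section 62:

```python
def second_unit_operating_normally(thickness_from_unit_52_um: float,
                                   thickness_from_unit_12_um: float,
                                   threshold_um: float = 1.0) -> bool:
    """The second measuring unit 52 is judged normal when its thickness value
    agrees with that of the first measuring unit 12 to within the threshold."""
    return abs(thickness_from_unit_52_um - thickness_from_unit_12_um) <= threshold_um


# Example of the error path (indicating step): a 1.5 um disagreement exceeds
# the 1.0 um placeholder threshold, so the operator is notified.
if not second_unit_operating_normally(771.5, 770.0):
    print("Error: measuring unit 52 is not operating normally")
```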
Moreover, in the grinding apparatus2, it is also possible to confirm whether the measuring unit, i.e., the second measuring unit,52is operating normally or not prior to the grinding of the workpiece unit11.FIG.8is a flowchart of an example of the processing sequence of the method of driving the grinding apparatus2for confirming whether the second measuring unit52is operating normally or not prior to the grinding of the workpiece unit11. According to the method of driving the grinding apparatus2as illustrated inFIG.8, first, the delivery mechanism6delivers the workpiece unit11out of the cassette8aplaced on the cassette rest base10a(unloading step: S21). Then, the measuring unit, i.e., the first measuring unit,12measures values to be used to calculate a value of the thickness of the workpiece unit11, i.e., the first distance and the second distance referred to above (measuring step: S22). Thereafter, the deciding section70of the control unit58calculates a value of the thickness of the workpiece unit11, as described above. Then, the workpiece unit11is delivered to the chuck table26(loading step: S23). Specifically, the delivery mechanism6delivers the workpiece unit11to the position adjusting mechanism20, and the delivery mechanism22delivers the workpiece unit11that has been adjusted in position by the position adjusting mechanism20from the position adjusting mechanism20, and then delivers the workpiece unit11to the chuck table26. Then, the measuring unit, i.e., the second measuring unit,52measures values to be used to calculate a value of the thickness of the workpiece unit11held on the chuck table26, i.e., the vertical position, i.e., the height, of the upper surface of the workpiece unit11and the vertical position, i.e., the height, of the holding surface26aof the chuck table26(measuring step: S24). Then, the deciding section70calculates a value of the thickness of the workpiece unit11on the basis of the results of the measurement performed by the measuring unit52. Specifically, the deciding section70calculates the difference between the vertical position, i.e., the height, of the upper surface of the workpiece unit11and the vertical position, i.e., the height, of the holding surface26aof the chuck table26, as a value of the thickness of the workpiece unit11. Then, the deciding section70decides whether the measuring unit52is operating normally or not by comparing the value of the thickness of the workpiece unit11obtained on the basis of the values of the measurement performed by the measuring unit52and the value of the thickness of the workpiece unit11obtained on the basis of the values of the measurement performed by the measuring unit12(deciding step: S25). For example, if the compared values are equal to each other, then the deciding section70determines that the measuring unit52is operating normally, and if the compared values are different from each other, then the deciding section70determines that the measuring unit52is not operating normally. Alternatively, if the difference between the compared values is equal to or smaller than a predetermined threshold value, then the deciding section70may determine that the measuring unit52is operating normally, and if the difference between the compared values is in excess of the predetermined threshold value, then the deciding section70may determine that the measuring unit52is not operating normally. The storage section62may store the threshold value in advance. 
If the deciding section70determines that the measuring unit52is not operating normally (S25: NO), then that effect is indicated to the operator (indicating step: S26). For example, an error message, i.e., a message representing that the measuring unit52is not operating normally, is displayed on the display that is the component of the touch panel. According to the examples of the processing sequence of the method of driving the grinding apparatus2as illustrated inFIGS.7and8, it is confirmed whether the measuring unit52is operating normally or not by comparing the value of the thickness of the workpiece unit11obtained on the basis of the values of the measurement performed by the measuring unit, i.e., the second measuring unit,52and the value of the thickness of the workpiece unit11obtained on the basis of the values of the measurement performed by the measuring unit, i.e., the first measuring unit,12. Therefore, a failure or a malfunction of the measuring unit52can be found at an early stage. It is thus possible to find at an early stage problems such as a processing defect due to an excess or lack of grinding on the workpiece unit11. Furthermore, it is confirmed using the measuring unit, i.e., the first measuring unit,12whether the workpiece unit11is a target to be ground or not and whether the measuring unit, i.e., the second measuring unit,52is operating normally or not. Consequently, the grinding apparatus2is preferable in terms of configurational simplicity to a grinding apparatus having individual components respectively for confirming whether the workpiece unit11is a target to be ground or not and whether the measuring unit52is operating normally or not. The grinding apparatus2merely represents an example of the grinding apparatus according to the present invention, and the grinding apparatus according to the present invention is not limited to the grinding apparatus2. For example, the measuring unit12and the measuring unit52may be replaced with a measuring unit for directly measuring the thickness of the workpiece unit11. Similarly, the measuring unit12may be replaced with a measuring unit for measuring a measurand while in contact therewith, and the measuring unit52may be replaced with a measuring unit for measuring a measurand while out of contact therewith, i.e., while not in contact therewith. In addition, the method of driving the grinding apparatus2as illustrated inFIGS.6through8merely represents examples of the method of driving the grinding apparatus according to the present invention. The method of driving the grinding apparatus according to the present invention is not limited to either of the examples of the method of driving the grinding apparatus2as illustrated inFIGS.6through8. For example, a method of driving the grinding apparatus which includes any combination of the steps illustrated inFIGS.6through8also represents an example of the method of driving the grinding apparatus according to the present invention. The structural details, method details, etc. according to the above embodiment and modifications can be changed or modified without departing from the scope of the present invention. The present invention is not limited to the details of the above-described preferred embodiment. The scope of the invention is defined by the appended claims and all changes and modifications as fall within the equivalence of the scope of the claims are therefore to be embraced by the invention.
11858091 | DETAILED DESCRIPTION FOR CARRYING OUT THE INVENTION Generally stated, disclosed herein is an apparatus for recirculating fluids in the semiconductor industry. Further, methods using the apparatus for recirculating fluids in the semiconductor industry are disclosed. Referring to the drawings, wherein like reference numerals are used to indicate like or analogous components throughout the several views, and with particular reference toFIGS.1-23, there is illustrated an exemplary embodiment of an apparatus100for recirculating fluids, for example, in the semiconductor industry. The apparatus100may include a base portion110, an inlet portion130, a coupler150, and a nozzle member180. The inlet portion130may be coupled to a first end112of the base portion110by the coupler150. The nozzle member or nozzle portion180may be coupled to a second end114of the base portion110. When the base portion110, inlet portion130, and coupler150are attached together a passageway170is formed extending through the apparatus100. The base portion110may include a first portion116, a second portion118and a connector120coupling the first portion116to the second portion118. The first portion116may be, for example, longer than the second portion118, as described in greater detail below with reference toFIG.23. The connector120may be, for example, angled to position the first portion116at an angle with respect to the second portion118. The inlet portion130may include a first end132and a second end134that is connected to the coupler150. The inlet portion130may also include a first portion136, a second portion138, and a connector140positioned between the first portion136and the second portion138. The connector140may, for example, have a diameter smaller than the diameter of the first portion136and the diameter of the second portion138. The first portion136may be secured to a recirculation system, as described in greater detail below with reference toFIG.24. The first portion136may be, for example, tapered from the first end132to the connector140. The second portion138may be received within a portion of the coupler150. The second portion138may have, for example, a uniform diameter along the entire length of the second portion138. Although not shown, in alternative embodiments, the second portion138may have, for example, different diameters or a varying diameter along the length of the second portion138. The coupler150may include a first end152and a second end154that is coupled to the first portion116of the base portion110. The coupler150may also include a first portion156, a second portion158, and a connector160positioned between the first portion156and the second portion158. The connector160may have a first outer diameter, the first portion156may have a second outer diameter, and the second portion158may have a third outer diameter. In an embodiment, the first outer diameter may be smaller than the third outer diameter and the second outer diameter may be larger than the first and third outer diameters. In addition, the first outer diameter of the connector160may be approximately the same size as the inner diameter of the interior engagement portions of the first portion156and the second portion158. The interior engagement portions allow for the connector160to be inserted within the passageway of the first portion156and the second portion158while aligning the passageway of the connector160with the passageways of the first portion156and second portion158. 
The first portion156couples to the first end112of the base portion110. The second portion158engages the first portion116of the base portion110at the first end112. With continued reference toFIGS.1-13and as best seen inFIGS.14-22, the nozzle member180may include a first end182and a second end184. The nozzle member180may also include a base portion186, a nozzle portion188, and an inlet194. The nozzle portion188may, for example, extend away from the exterior surface of the base portion186between the first end182and the second end184of the nozzle member180. In the depicted embodiment, the nozzle portion188is positioned near the second end184of the base portion186. The nozzle portion188, for example, tapers as it extends away from the base portion186to a tip192. The nozzle portion188includes a helical channel or groove190extending from a position near the base portion186to a position near the tip192of the nozzle portion188. The helical groove190extends from an exterior surface through the nozzle portion188to an interior surface. The nozzle member180may further include an opening196extending through the interior of the nozzle member180. The first end182of the nozzle member180may have an outer diameter sized to be received within the inner diameter of the second portion118at the second end114of the base portion110. The inner diameter of the second portion118may include an interior engagement portion for receiving the first end182of the nozzle member180to align the interior passageway170of the base portion110with the interior surface of the opening196. As shown inFIGS.21and22, the inlet194engages the opening196extending through the base portion186. The opening196connects the inlet194to the helical groove190allowing fluid to mix as it passes through and out of the nozzle member180. In addition, the inlet194is aligned with and engages the passageway170to allow, for example, a slurry to pass through the inlet portion130, coupler150, and base portion110and into the nozzle member180. The nozzle portion188allows for, for example, an upward swirl of the slurry in a 360 degree or radial pattern. The upward swirl created by the nozzle portion188minimizes or eliminates the slurry shear caused by mixing or recirculating the slurry. In addition to the swirl created by the nozzle portion188, the angle between the first portion116and second portion118of the base portion110may also minimize or eliminate the slurry shear caused by mixing or recirculating the slurry. Referring now toFIG.23, the dimensions of portions of the apparatus100are shown. Further to the description of the apparatus100above, the first portion136of the inlet portion130may also include a first tool engagement portion210with a first engagement edge211. With continued reference toFIG.23, the connector120may include a connector midpoint200. The apparatus100may include a first length l1extending between the first engagement edge211and the connector midpoint200. The first length l1may range between, for example, approximately twenty (20) inches and approximately forty (40) inches. More specifically, the first length l1may range between approximately twenty-two (22) inches and approximately thirty-eight (38) inches. In some embodiments, the first length l1may be approximately twenty-three (23) inches, approximately thirty-one (31) inches, approximately thirty-two (32) inches, approximately thirty-five (35) inches, or approximately thirty-seven (37) inches. 
With continued reference toFIG.23, the first portion116of the base portion110may include a second length l2. The second length l2may extend between the first end112of the base portion110and a second end215of the first portion116. The second length l2may range between, for example, approximately fifteen (15) inches and approximately thirty-five (35) inches. More specifically, the second length l2may range between approximately sixteen (16) inches and approximately thirty-two (32) inches. Still more specifically, the second length l2may be approximately seventeen (17) inches, approximately twenty-five (25) inches, approximately twenty-six (26) inches, approximately twenty-eight (28) inches, or approximately thirty-one (31) inches. The ratio between the first length l1and the second length l2(i.e., l1/l2) may range between, for example, approximately 1.1 and approximately 1.5. More specifically, the ratio between the first length l1and the second length l2(i.e., l1/l2) may range between approximately 1.2 and approximately 1.4. Still more specifically, the ratio between the first length l1and the second length l2(i.e., l1/l2) may be approximately 1.2, approximately 1.3, or approximately 1.4. As shown inFIG.23, the connector120may produce angle Ø between the first portion116and the second portion118. The angle Ø may range from, for example, approximately 90 degrees to approximately 160 degrees. More specifically, the angle Ø may range from, for example, approximately 120 degrees to approximately 150 degrees. Still more specifically, the angle Ø may be approximately 90 degrees, approximately 112 degrees, approximately 135 degrees, or approximately 157 degrees. With continued reference toFIG.23, the second portion118of the base portion110may include a second tool engagement portion205. The second tool engagement portion205may include a second engagement edge206. The apparatus100may include a third length l3extending between the second engagement edge206and the connector midpoint200. The third length l3may range between, for example, approximately two (2) inches and approximately four (4) inches. More specifically, the third length l3may be approximately two (2) inches, approximately 2.5 inches, approximately three (3) inches, approximately 3.5 inches, or approximately four (4) inches. A fourth length l4between the second end184of the nozzle portion180and the connector midpoint200is shown inFIG.23. The fourth length l4may range between, for example, approximately five (5) inches and approximately seven (7) inches. More specifically, the fourth length l4may be, for example, approximately five (5) inches, approximately 5.5 inches, approximately six (6) inches, approximately 6.5 inches, or approximately seven (7) inches. It is contemplated that some or all components of the apparatus100may be partially or entirely made with fluoropolymers, such as perfluoroalkoxy alkanes (PFA), polytetrafluoroethylene (PTFE), fluorinated ethylene propylene (FEP), or alternative materials with like properties. The components of the apparatus100may, for example, all be made of only one material, each be made of a different material, each be made of a combination of materials, or each be made either of only one material or a combination of materials.
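The dimensional relationships given above can be checked with simple arithmetic. The following short sketch uses one pair of lengths chosen from the stated ranges; the specific values are illustrative assumptions, not prescribed dimensions.

# Illustrative check: a first length of 31 inches and a second length of
# 25 inches (both within the ranges given above) yield a ratio l1/l2 that
# falls inside the described range of approximately 1.1 to 1.5.
l1_inches = 31.0   # first length, first engagement edge to connector midpoint
l2_inches = 25.0   # second length of the first portion of the base portion

ratio = l1_inches / l2_inches
print(f"l1/l2 = {ratio:.2f}")     # 1.24
print(1.1 <= ratio <= 1.5)        # True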
Although not shown, a mixing system may include more than one apparatus100. For example, the mixing system may include a first apparatus100and a second apparatus100which each connect to an end of a recirculation line. In further embodiments, the mixing system may include any number of apparatus100coupled to the end of the recirculation line. Each apparatus100in the mixing system may have the same length as, or a different length from, the remaining apparatus100. A method of recirculating fluids is also disclosed and includes obtaining an apparatus100. The apparatus includes a base portion110, an inlet portion130, a coupler150connecting the inlet portion130to the base portion110at a first end112, and a nozzle member180coupled to the base portion110at a second end114. The method may also include coupling the apparatus100to a recirculation system, such as shown inFIG.24. The method may further include passing a semiconductor slurry through the recirculation system and into a storage drum300. As shown inFIG.24, the recirculation system may include a recirculation loop301, which draws a slurry out of a storage drum300. The recirculation system may also include a pump320, a sample valve310, a rotameter305, and a pressure gauge315positioned along the recirculation loop301. The slurry may be drawn out of the storage drum300by the pump320. The slurry may then travel through the recirculation loop301and be deposited back into the storage drum300through apparatus100. As the slurry travels through the recirculation loop301, the slurry health and thoroughness of mixing may be checked by periodic sampling of the recirculating slurry via the sample valve310positioned within the recirculation loop301. In addition, the volumetric flow rate may be monitored by a rotameter305positioned within the recirculation loop301. Further, the flow pressure of the recirculation system may be monitored by a pressure gauge315, also positioned within the recirculation loop301. Although not shown, it is also contemplated that the apparatus100may include multiple second portions118each with a nozzle portion180to increase the mixing of the slurry. The method of recirculating fluids uses the apparatus100to maintain semiconductor slurry health. Slurry health, as used herein, refers to the physical properties of the particles in the raw slurry or blended slurry. These include the particle counts by size (e.g., 200 nm, 500 nm, 1μ, 5μ, etc.), along with the particle distribution (the number of particles in each size bucket in comparison to the total number of particles per unit volume), D50, also known as the mean particle size, the maximum particle size, the amount and type of agglomerates, the amount and type of aggregates, and a few others. The majority of end users (CMP groups) find that, practically, the particle size and distribution are the easiest to measure, and hence to correlate to defects in the wafer resulting from large particles, too many fines (undersized particles), or shifts in D50 or maximum particle size. These have been directly traced to defects in wafers and loss of revenue. Thus, the method of using the apparatus100utilizes the existing energy available in the recirculating raw slurry stream (as provided by the recirculating pump320) to mix the slurry in order to reduce or virtually eliminate shearing of the particles (which changes the distribution and creates fine particles). Since the slurries in use are constantly changing to meet market demand (for example, the latest iPhones and Galaxys), the number of particles per unit volume has risen from 2-3 million/cc to 5-6 million/cc. These are sometimes also known as nano-slurries. Thus, the method as described above is designed to maintain the supplier's initial size and distribution characteristics.
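The slurry health metrics mentioned above can be sketched numerically. In the following fragment the size buckets and particle counts are invented for illustration, and D50 is estimated as the size at which the cumulative particle count first reaches half of the total, which is one common way of reporting it.

# Illustrative sketch (invented bucket edges and counts): compute the particle
# distribution (fraction of the total count in each size bucket) and estimate
# D50 from the cumulative count.
bucket_upper_edges_nm = [200, 500, 1000, 5000]
counts_per_cc = [3_000_000, 1_800_000, 900_000, 300_000]

total = sum(counts_per_cc)
distribution = [count / total for count in counts_per_cc]
print("distribution:", [f"{fraction:.1%}" for fraction in distribution])

cumulative = 0
for upper_edge_nm, count in zip(bucket_upper_edges_nm, counts_per_cc):
    cumulative += count
    if cumulative >= total / 2:
        print("estimated D50 (nm):", upper_edge_nm)
        break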
In another embodiment, the recirculation system with the apparatus100may be mounted on the top of a tank, for example, a 265 L tank with a conical bottom to maintain homogeneity. The tank may be, for example, a "day tank" from which other systems are fed the slurry. The apparatus100will assist with continuing to mix the slurry to maintain a homogeneous state of the slurry while in the "day tank." When a "day tank" is used, the length of the first portion116of the apparatus100may vary based on the size of the tank. In addition, the length of the second portion118of the apparatus100may also vary based on the size of the tank. For example, the larger the tank, the longer the first portion116and the second portion118may be. In yet another embodiment, the recirculation system with at least one apparatus100may be mounted on top of a "day tank" which may be, for example, at least a 500 L tank with a conical bottom unit. The at least one apparatus100will assist with continuing to mix the slurry to maintain a homogeneous state of the slurry while in the "day tank." The method of using at least one apparatus100mounted on the top of a "day tank" may include mixing in additional drums of slurry to the large tank to maintain a desired level in the tank. When additional drums of slurry are added to the large tank, the at least one apparatus100allows for the new slurry to be mixed with the existing slurry to spread any minor variations between drums of slurry over a larger volume to significantly reduce the risk of dramatic changes in material, for example, particle size distribution, pH, density, and the like. The incorporation of any variations throughout the larger volume may allow for defects to be avoided or in the worst case make the issue minor enough that the wafer can be saved through a re-working process. The method of using a larger tank may include inserting more than one apparatus100into the tank in order to maintain the mixing in the larger tank. For example, for a 500 L tank, the recirculation line may be split and coupled to two apparatus100providing for two nozzles188. The two nozzles188may be, for example, spaced 180° apart in order to maintain the mixing in the larger tank. In addition, the length of the two apparatus100may, for example, vary with one apparatus100being longer than the second apparatus100. With the two different-length apparatus100, the method may include using both nozzles188when the tank is full and then turning off flow to at least one of the two nozzles188when the level of slurry in the tank drops below a specified level. The ability to adjust the number of nozzles188which the slurry is flowing through based on the level of slurry in the tank allows the user to avoid over-mixing the slurry and preserve slurry health. In other large tanks, it is also contemplated that more than two apparatus100may be included to achieve the necessary mixing. For tanks with more than two apparatus100, the nozzles188may be, for example, radially spaced around the tank to achieve the maximum effect and desired mixing. In the embodiments with more than two nozzles188, the lengths of some of the apparatus100or all of the apparatus100may vary to allow for nozzles188to be turned off depending on the level of slurry in the tank.
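The level-dependent use of the nozzles188described above can be expressed as a small piece of control logic. The following sketch is only illustrative; the nozzle names and the minimum levels at which each nozzle remains active are assumptions, since the description only states that flow to at least one nozzle may be turned off when the slurry level drops below a specified level.

# Illustrative sketch (assumed names and levels): select which nozzles receive
# flow based on the slurry level in the tank, so that flow can be turned off
# to avoid over-mixing when the level drops below a specified value.
NOZZLES = [
    ("nozzle_of_longer_apparatus", 0.0),     # assumed to remain active at any level
    ("nozzle_of_shorter_apparatus", 250.0),  # assumed minimum level in liters
]

def active_nozzles(slurry_level_liters):
    return [name for name, minimum_level in NOZZLES if slurry_level_liters >= minimum_level]

print(active_nozzles(500.0))   # both nozzles used when the tank is full
print(active_nozzles(150.0))   # only one nozzle used at a low slurry level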
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprise" (and any form of comprise, such as "comprises" and "comprising"), "have" (and any form of have, such as "has", and "having"), "include" (and any form of include, such as "includes" and "including"), and "contain" (and any form of contain, such as "contains" and "containing") are open-ended linking verbs. As a result, a method or device that "comprises," "has," "includes," or "contains" one or more steps or elements possesses those one or more steps or elements, but is not limited to possessing only those one or more steps or elements. Likewise, a step of a method or an element of a device that "comprises," "has," "includes," or "contains" one or more features possesses those one or more features, but is not limited to possessing only those one or more features. Furthermore, a device or structure that is configured in a certain way is configured in at least that way, but may also be configured in ways that are not listed. The invention has been described with reference to the preferred embodiments. It will be understood that the architectural and operational embodiments described herein are exemplary of a plurality of possible arrangements to provide the same general features, characteristics, and general system operation. Modifications and alterations will occur to others upon a reading and understanding of the preceding detailed description. It is intended that the invention be construed as including all such modifications and alterations.
11858092 | DETAILED DESCRIPTION Hereinafter, exemplary embodiments will be described with reference to the accompanying drawings. In the specification and the drawings, parts having substantially same functions and configurations will be assigned same reference numerals, and redundant description thereof will be omitted. <Substrate Processing System> First, a configuration of a substrate processing system according to an exemplary embodiment will be described.FIG.1is a plan view schematically illustrating a configuration of a substrate processing system1. In the following, in order to clarify positional relationships, the X-axis, Y-axis and Z-axis which are orthogonal to each other will be defined. The positive Z-axis direction will be regarded as a vertically upward direction. In the substrate processing system1according to the present exemplary embodiment, a wafer W as a substrate, shown inFIG.2, is thinned. The wafer W is a semiconductor wafer such as, but not limited to, a silicon wafer or a compound semiconductor wafer. A device (not shown) is formed on a front surface W1of the wafer W, and a protective tape B for protecting the device is attached on the front surface W1. The wafer W is thinned as preset processings such as grinding and polishing are performed on a rear surface W2of the wafer W. The substrate processing system1includes a carry-in/out station2and a processing station3connected as a single body. The carry-in/out station2is configured as a carry-in/out section in which a cassette C, which is capable of accommodating therein a plurality of wafers W, is carried in/out from/to the outside. The processing station3is equipped with various kinds of processing apparatuses configured to perform preset processings on the wafer W. The carry-in/out station2is equipped with a cassette placing table10. In the shown example, the cassette placing table10is configured to be capable of holding a plurality of, for example, four cassettes C in series in the X-axis direction. Further, the carry-in/out station2includes a wafer transfer area20provided adjacent to the cassette placing table10. A wafer transfer device22configured to be movable on a transfer path21extending in the X-axis direction is provided in the wafer transfer area20. The wafer transfer device22is equipped with a transfer arm23configured to be movable in the horizontal direction and the vertical direction and pivotable around a horizontal axis and a vertical axis (θ direction), and is capable of transferring, with this transfer arm23, the wafers W between each cassette C on the cassette placing table10and respective apparatuses30and31of the processing station3to be described later. That is, the carry-in/out station2is configured to be capable of carrying the wafers W into/from the processing station3. Within the processing station3, the processing apparatus30configured to perform various processings such as grinding and polishing on the wafer W to thin the wafer W and the cleaning apparatus31configured to clean the wafer W processed by the processing apparatus30are arranged toward the positive X-axis direction from the negative X-axis direction. The processing apparatus30includes a turntable40, a transfer unit50, an alignment unit60, a cleaning unit70, a rough grinding unit80as a rough grinder, a fine grinding unit90as a finishing grinder, a gettering layer forming unit100, a tape thickness measuring unit110as a tape thickness measurer and a relative thickness measuring unit120as a relative thickness measurer.
(Turntable) The turntable40is configured to be rotated by a rotating device (not shown). Four chucks200as substrate holders each configured to attract and hold the wafer W are provided on the turntable40. The chucks200are arranged on a circle concentric with the turntable40at a regular distance, that is, an angular distance of 90 degrees therebetween. The four chucks200can be moved to four processing positions P1to P4as the turntable40is rotated. In the present exemplary embodiment, the first processing position P1is a position at a positive X-axis and negative Y-axis side of the turntable40, and the cleaning unit70is disposed thereat. Further, the alignment unit60is disposed at a negative Y-axis side of the first processing position P1. The second processing position P2is a position at a positive X-axis and positive Y-axis side of the turntable40, and the rough grinding unit80is disposed thereat. The third processing position P3is a position at a negative X-axis and positive Y-axis side of the turntable40, and the fine grinding unit90is disposed thereat. The fourth processing position P4is a position at a negative X-axis and negative Y-axis side of the turntable40, and the gettering layer forming unit100is disposed thereat. (Chuck) As depicted inFIG.3, a front surface of each chuck200, that is, a holding surface of the wafer W has a protruding shape with a central portion protruding higher than an end portion thereof, when viewed from the side. In a grinding processing (rough grinding and fine grinding), a ¼ arc portion of a grinding whetstone280(290) to be described later comes into contact with the wafer W. Further, in the polishing processing, a ¼ arc portion of a polishing whetstone300to be described later comes into contact with the wafer W. The front surface of the chuck200is formed to have the protruding shape and the wafer W is attracted to conform to this front surface of the chuck200so that the wafer W is ground and polished into a uniform thickness. By way of example, a porous chuck is used as the chuck200. A porous201as a porous body having a multiple number of holes therein is provided on the surface of the chuck200. The porous201may be made of various kinds of materials as long as they are porous. By way of non-limiting example, the porous201may be made of carbon, alumina, silicon carbide, or the like. By suctioning the wafer W via the porous201with a suction mechanism (not shown), the wafer W is attracted to and held by the chuck200. The chuck200is held on a chuck table202. The chuck200and the chuck table202are supported on a base203. The base203is equipped with a rotating device204configured to rotate the chuck200, the chuck table202and the base203; and an adjusting device205as an adjuster configured to adjust an inclination of the chuck200, the chuck table202and the base203. The rotating device204is equipped with: a rotation shaft210configured to rotate the chuck200; a driving unit220configured to apply a rotational driving force when rotating the chuck200; and a driving force transmitter230configured to transmit the rotational driving force applied by the driving unit220to the rotation shaft210. The rotation shaft210is fixed at a central portion of a bottom of the base203. Further, the rotation shaft210is rotatably supported at a supporting table211. The chuck200is rotated around this rotation shaft210. The driving unit220is provided independently from the rotation shaft210. 
The driving unit220is equipped with a driving shaft221; and a motor222configured to rotate the driving shaft221. As shown inFIG.3andFIG.4AandFIG.4B, the driving force transmitter230includes a driven pulley231provided at the rotation shaft210, a driving pulley232provided at the driving shaft221and a belt233wound around the driven pulley231and the driving pulley232. The rotational driving force applied by the driving unit220is delivered to the rotation shaft210via the driving pulley232, the belt233and the driven pulley231. The driven pulley231is divided into an inner driven pulley231afixed at an outer surface of the rotation shaft210and an outer driven pulley231bprovided at an outside of the inner driven pulley231a. An inner magnet234as a first driving force transmitter is provided on an outer surface of the inner driven pulley231a, and an outer magnet235as a second driving force transmitter is provided on an inner surface of the outer driven pulley231b. A hollow portion236is formed between the inner magnet234and the outer magnet235. With this configuration, the driving force transmitter230transmits the rotational driving force by the driving unit220to the rotation shaft210through a non-contact type magnet drive mechanism. That is, the rotation shaft210at the driven side and the driving unit220at the driving side are separated and configured to be independent from each other. Further, by providing the hollow portion236as mentioned above, vibration and heat of the motor222are not delivered to the chuck200to affect it. In such a case, the wafer W held on the chuck200can be appropriately ground. As depicted inFIG.3andFIG.5, the adjusting device205includes a single fixed shaft240and two adjustment shafts241. The fixed shaft240and the adjustment shafts241are concentrically arranged at an outer peripheral portion of the base203at a regular distance therebetween. For example, a ball screw is used as the adjustment shaft241, and a motor242configured to rotate the adjustment shaft241is connected to the adjustment shaft241. The adjustment shaft241is moved in a vertical direction while being rotated by the motor242, thus moving the base203in the vertical direction. As the two adjustment shafts241are respectively moved in the vertical direction with respect to the fixed shaft240, an inclination of the chuck200is adjusted via the base203. Further, the number and the layout of the adjustment shafts241are not limited to the shown example as long as two or more adjustment shafts are provided. By way of example, the fixed shaft240may be omitted, and only the adjustment shafts241may be provided. Furthermore, the configuration of the adjusting device205is not limited to the shown example. Instead of the adjustment shaft241implemented by the ball screw and the motor242, a piezoelectric element, for example, may be used. As illustrated inFIG.6, for example, in case of adjusting the inclination of the chuck200by moving one adjustment shaft241adownwards and the other adjustment shaft241bupwards, the rotation shaft210is also inclined from the vertical direction. This inclination of the rotation shaft210is absorbed by the hollow portion236, and thus not delivered to the driving unit220. 
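The tilt produced by moving the two adjustment shafts241can be estimated from simple geometry. In the sketch below the three shafts are assumed to sit on a circle of radius 150 mm at 120-degree spacing, with the fixed shaft240held at height zero; these numbers and the plane-through-three-points model are assumptions made for illustration only.

import math

# Illustrative sketch (assumed support radius and 120-degree spacing): estimate
# the chuck inclination produced when the two adjustment shafts are raised or
# lowered while the fixed shaft stays at height zero. The chuck is modeled as
# the plane through the three support points.
def chuck_tilt_degrees(z_adjust_a_mm, z_adjust_b_mm, radius_mm=150.0):
    points = []
    for angle_deg, z in ((0.0, 0.0), (120.0, z_adjust_a_mm), (240.0, z_adjust_b_mm)):
        a = math.radians(angle_deg)
        points.append((radius_mm * math.cos(a), radius_mm * math.sin(a), z))
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = points
    # Plane normal from the cross product of two edge vectors.
    ux, uy, uz = x2 - x1, y2 - y1, z2 - z1
    vx, vy, vz = x3 - x1, y3 - y1, z3 - z1
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    # The tilt is the angle between the plane normal and the vertical axis.
    return math.degrees(math.acos(abs(nz) / math.sqrt(nx * nx + ny * ny + nz * nz)))

# Moving shaft 241a down by 0.05 mm and shaft 241b up by 0.05 mm gives a tilt
# of roughly 0.02 degrees with the assumed geometry.
print(f"{chuck_tilt_degrees(-0.05, +0.05):.3f} degrees")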
An operation of the rotating device204when the chuck200is tilted by the adjusting device205as stated above will be further explained in comparison with a conventional example.FIG.7AandFIG.7Bare diagrams schematically illustrating operations of rotating devices:FIG.7Aillustrates an operation of the rotating device204according to the present exemplary embodiment, andFIG.7Billustrates an operation of a rotating device500in the conventional example. As shown inFIG.7B, the rotating device500of the conventional example includes a rotation shaft501, a driving shaft502, a motor503, a driven pulley504, a driving pulley505and a belt506. A rotational driving force by the motor503is delivered to the rotation shaft501via the driving shaft502, the driving pulley505, the belt506and the driven pulley504. In the rotating device500of the conventional example, the driven pulley504does not have such a hollow portion as provided in the present exemplary embodiment. Therefore, if the chuck200is tilted, the driven pulley504is also tilted, following the rotation shaft501. Accordingly, the driven pulley504and the belt506do not come into appropriate contact with each other, so that the belt506becomes abnormal. Furthermore, since a tension applied to the belt506fluctuates, the rotation becomes instable. In contrast, in the present exemplary embodiment shown inFIG.7A, even if the rotation shaft210is tilted when the chuck200is tilted, the inclination of the rotation shaft210is absorbed by the hollow portion236and thus not delivered to the driving unit220. Therefore, the rotation is stable. Moreover, if the belt506and the rotation shaft501are directly connected as shown inFIG.7B, vibration of the motor503or the belt506is delivered to the rotation shaft501, resulting in instable rotation. In the present exemplary embodiment shown inFIG.7A, however, since the belt233and the rotation shaft210are separated through the hollow portion236as stated above, the vibration of the motor222or the belt233is not delivered to the rotation shaft210, so that the rotation is stabilized. (Transfer Unit) As illustrated inFIG.1, the transfer unit50is configured to be movable on a transfer path250extending in the Y-axis direction. The transfer unit50has a transfer arm251configured to be movable in the horizontal direction and the vertical direction and pivotable around a vertical axis (θ direction), and is capable of transferring the wafer W between the alignment unit60and the chuck200at the first processing position P1with this transfer arm251. The alignment unit60is configured to adjust a direction of the wafer W before being processed in the horizontal direction. The alignment unit60is equipped with a base260, a spin chuck261configured to hold and rotate the wafer W; and a detector262configured to detect a notch of the wafer W. A position of the notch of the wafer W is detected by the detector262while the wafer W held by the spin chuck261is being rotated, and by adjusting the position of the notch, the direction of the wafer W in the horizontal direction is adjusted. (Cleaning Unit) The cleaning unit70is configured to clean the rear surface W2of the wafer W. The cleaning unit70is disposed above the chuck200, and is equipped with a nozzle270configured to supply a cleaning liquid, for example, pure water onto the rear surface W2of the wafer W. The cleaning liquid is supplied from the nozzle270while the wafer W held by the chuck200is being rotated. 
The supplied cleaning liquid is diffused on the rear surface W2of the wafer W, so that the rear surface W2is cleaned. Further, the cleaning unit70may further have a function of cleaning the chuck200. In such a case, the cleaning unit70may be equipped with, for example, a nozzle (not shown) configured to supply the cleaning liquid to the chuck200and a stone (not shown) configured to come into contact with the chuck200and clean the chuck200physically. (Rough Grinding Unit) The rough grinding unit80is configured to grind the rear surface W2of the wafer W roughly. As depicted inFIG.8, the rough grinding unit80includes a base281and the grinding whetstone280supported at the base281. The base281is connected to a driver283via a spindle282. The driver283incorporates, for example, a motor (not shown), and is configured to move the grinding whetstone280and the base281in the vertical direction and rotate them. By respectively rotating the chuck200and the grinding whetstone280while keeping the wafer W held by the chuck200in contact with the ¼ arc portion of the grinding whetstone280, the rear surface W2of the wafer W is roughly ground. At this time, a grinding liquid, for example, water is supplied onto the rear surface W2of the wafer W. Further, in the present exemplary embodiment, though the grinding whetstone280is used as a grinding member for the rough grinding, the grinding member is not limited thereto. By way of non-limiting example, the grinding member may be a non-woven fabric containing abrasive grains, or the like. (Fine Grinding Unit) The fine grinding unit90is configured to grind the rear surface W2of the wafer W finely. A configuration of the fine grinding unit90is substantially the same as the configuration of the rough grinding unit80, and the fine grinding unit90is equipped with the grinding whetstone290, a base291, a spindle292and a driver293. Here, however, a particle size of the grinding whetstone290for the fine grinding is smaller than that of the grinding whetstone280for the rough grinding. By respectively rotating the chuck200and the grinding whetstone290while supplying the grinding liquid onto the rear surface W2of the wafer W held by the chuck200in the state that the rear surface W2of the wafer W is in contact with the ¼ arc portion of the grinding whetstone290, the rear surface W2of the wafer W is ground. Like the grinding member for the rough grinding, the grinding member for the fine grinding is not limited to the grinding whetstone290. (Gettering Layer Forming Unit) The gettering layer forming unit100is configured to form a gettering layer on the rear surface W2of the wafer W while removing, through a stress relief processing, a damage layer which is formed on the rear surface W2of the wafer W when the rough grinding and the fine grinding are performed on the rear surface W2of the wafer W. A configuration of this gettering layer forming unit100is substantially the same as that of the rough grinding unit80or the fine grinding unit90. The gettering layer forming unit100is equipped with a polishing whetstone, a base, a spindle and a driver. Here, however, a particle size of the polishing whetstone300is smaller than those of the grinding whetstones280and290. By rotating the chuck200and the polishing whetstone300respectively while keeping the rear surface W2of the wafer W held by the chuck200in contact with a ¼ arc portion of the polishing whetstone300, the rear surface W2of the wafer W is polished.
Further, though dry polishing is performed in the gettering layer forming unit100according to the present exemplary embodiment, the exemplary embodiment is not limited thereto. By way of example, the rear surface W2may be polished while supplying a polishing liquid, for example, water to the rear surface W2of the wafer W. (Tape Thickness Measuring Unit) As shown inFIG.9AandFIG.9B, in the tape thickness measuring unit110, a thickness of the protective tape B on the wafer W held by the transfer arm251of the transfer unit50is measured. That is, the tape thickness measuring unit110is configured to measure the thickness of the protective tape B of the wafer W which is being transferred from the alignment unit60to the chuck200placed at the first processing position P1. As the tape thickness measuring unit110, a measurement device configured to measure the thickness of the protective tape B without coming into contact with the protective tape B is used. For example, a spectral interferometer may be used. As shown inFIG.9A, the tape thickness measuring unit110is equipped with a sensor310and a calculator311. The sensor310irradiates light having a preset wavelength range, for example, laser light to the protective tape B, and receives reflection light reflected from a front surface B1of the protective tape B and reflection light reflected from a rear surface B2of the protective tape B. The calculator311calculates the thickness of the protective tape B based on a phase difference between the two reflection lights received by the sensor310. As depicted inFIG.9B, the sensor310is configured to be moved by a moving mechanism (not shown) along a measurement line L which extends along a diameter of the protective tape B. As the sensor310is moved from one end of the protective tape B to the other end thereof, the tape thickness measuring unit110is capable of measuring a thickness distribution of the protective tape B in a diametrical direction. Further, as shown inFIG.10, the tape thickness measuring unit110may be equipped with a plurality of sensors310. These sensors310are respectively configured to be movable along measurement lines L1to L3which extend along the diameter of the protective tape B. In this configuration, the tape thickness measuring unit110is capable of measuring thickness distributions of the protective tape B along the respective measurement lines L1to L3. By averaging these thickness distributions along the measurement lines L1to L3, the thickness distribution of the protective tape B is obtained. Furthermore, though the spectral interferometer is used as the tape thickness measuring unit110in the present exemplary embodiment, the configuration of the tape thickness measuring unit110is not limited thereto, and any of various kinds of measurement devices may be used as long as the thickness of the protective tape B can be measured.
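The thickness calculation from the phase difference and the averaging of the distributions along the measurement lines L1to L3can be sketched as follows. The interference relation used in the sketch (normal incidence, an optical path difference of twice the refractive index times the thickness, and no treatment of phase wrapping) is an assumption for illustration, since the embodiment only states that the calculator311uses the phase difference between the two reflections; the wavelength, refractive index and sample values are likewise assumed.

import math

# Illustrative sketch (assumed relation and values): convert a phase difference
# between the front- and rear-surface reflections into a tape thickness, and
# average the thickness distributions measured along measurement lines L1 to L3.
def tape_thickness_um(phase_difference_rad, wavelength_um=1.31, refractive_index=1.5):
    # Assumed normal-incidence relation: delta_phi = 4 * pi * n * t / lambda.
    return phase_difference_rad * wavelength_um / (4.0 * math.pi * refractive_index)

def averaged_distribution(distributions):
    # One list of samples per measurement line, taken at matching radial positions.
    return [sum(samples) / len(samples) for samples in zip(*distributions)]

print(f"{tape_thickness_um(math.pi):.3f} um")   # about 0.218 um for a pi phase difference

line_1 = [150.2, 150.0, 149.8]   # assumed thickness samples in micrometers
line_2 = [150.4, 150.1, 149.9]
line_3 = [150.0, 149.9, 149.7]
print(averaged_distribution([line_1, line_2, line_3]))   # approximately [150.2, 150.0, 149.8]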
Here, the purpose of measuring the thickness of the protective tape B by the tape thickness measuring unit110in the present exemplary embodiment will be explained. As shown inFIG.11AtoFIG.11D, the thickness of the protective tape B may be non-uniform within the surface thereof.FIG.11Aillustrates a case where a central portion of the protective tape B is thicker than both end portions thereof;FIG.11B, a case where the central portion of the protective tape B is thinner than both end portions thereof;FIG.11C, a case where, on a half-surface of the protective tape B, a central portion is curved in a recessed shape;FIG.11D, a case where, on the half-surface of the protective tape B, the central portion is curved in a protruding shape. In these cases, if the rear surface W2of the wafer W is ground and polished in the rough grinding unit80, the fine grinding unit90and the gettering layer forming unit100in sequence, a relative thickness which is a sum of a thickness of the wafer W and the thickness of the protective tape B becomes uniform within the surface thereof. As a result, the thickness of the wafer W becomes non-uniform within the surface thereof. Thus, prior to performing the rough grinding of the rear surface W2of the wafer W in the rough grinding unit80, the thickness of the protective tape B is measured in the tape thickness measuring unit110. Then, based on a measurement result, the inclination of the chuck200is adjusted by the adjusting device205. At this time, the inclination of the chuck200in the rough grinding unit80, the inclination of the chuck200in the fine grinding unit90and the inclination of the chuck200in the gettering layer forming unit100are respectively adjusted. Below, an example where the inclination of the chuck200is adjusted in the rough grinding unit80will be elaborated. By way of example, for the protective tape B shown inFIG.11A, the chuck200is tilted by raising an end of the chuck200near the grinding whetstone280, as shown inFIG.12A. For the protective tape B shown inFIG.11B, the chuck200is tilted by lowering the end of the chuck200near the grinding whetstone280, as shown inFIG.12B. For the protective tape B shown inFIG.11C, by lowering the end of the chuck200near the grinding whetstone280as shown inFIG.12C, the chuck200is tilted such that a leading end of the grinding whetstone280comes into contact with a central portion (recessed portion) of the half-surface of the wafer W. For the protective tape B shown inFIG.11D, by lowering the end of the chuck200near the grinding whetstone280as shown inFIG.12D, the chuck200is tilted such that the leading end of the grinding whetstone280comes into contact with a central portion (recessed portion) of the whole surface of the wafer W. That is, by raising an elevation angle of the grinding whetstone280by tilting the chuck200, the grinding whetstone280is allowed to easily come into contact with the central portion and the peripheral portion of the whole surface of the wafer W. In this way, the thickness of the wafer W after the grinding and the polishing of the rear surface W2of the wafer W can be uniform within the surface thereof.
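For the simple bow-shaped profiles ofFIG.11AandFIG.11B, the direction in which the end of the chuck200near the grinding whetstone280should be moved can be expressed as a small decision rule. The sign convention, the gain and the sample profiles in the sketch below are assumptions for illustration; in practice the adjustment would be made for each of the rough grinding, fine grinding and gettering layer forming units.

# Illustrative sketch (assumed convention): a positive return value means the
# end of the chuck near the grinding whetstone is raised (center of the tape
# thicker than the edges, FIG. 11A), a negative value means it is lowered
# (center thinner than the edges, FIG. 11B).
def chuck_end_adjustment_um(tape_thickness_profile_um, gain=1.0):
    center = tape_thickness_profile_um[len(tape_thickness_profile_um) // 2]
    edges = (tape_thickness_profile_um[0] + tape_thickness_profile_um[-1]) / 2.0
    return gain * (center - edges)

print(chuck_end_adjustment_um([149.0, 150.5, 149.0]))   # +1.5: raise the end
print(chuck_end_adjustment_um([150.5, 149.0, 150.5]))   # -1.5: lower the end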
(Relative Thickness Measuring Unit) The relative thickness measuring unit120is provided in each of the rough grinding unit80, the fine grinding unit90, and the gettering layer forming unit100. In the following, the relative thickness measuring unit120provided in the rough grinding unit80will be explained. As depicted inFIG.13, in the relative thickness measuring unit120, the relative thickness, which is the sum of the thickness of the wafer W and the thickness of the protective tape B, is measured for the wafer W which is being ground in the rough grinding unit80. The relative thickness measuring unit120is equipped with a first sensor320, a second sensor321and a calculator322. By way of example, a laser displacement meter may be used as the first sensor320. The first sensor320does not come into contact with the chuck200and is configured to measure a position (height) of a front surface of the chuck200where no porous201is provided. Here, if laser light is irradiated to the porous201, the laser light is absorbed by the porous201and is not reflected. This is why the position where no porous201is provided is measured. This surface of the chuck200is regarded as a reference surface. The second sensor321may also be a laser displacement meter, for example. The second sensor321does not come into contact with the wafer W and is configured to measure a position (height) of the rear surface W2of the wafer W. In the present exemplary embodiment, although the first sensor320and the second sensor321are the laser displacement meters, the present exemplary embodiment is not limited thereto, and any of various kinds of measurement devices capable of measuring a position of a measurement target in a non-contact manner can be used. The calculator322is configured to calculate the relative thickness by subtracting the position of the front surface of the chuck200measured by the first sensor320from the position of the rear surface W2of the wafer W measured by the second sensor321. The relative thickness is measured by the relative thickness measuring unit120while the rear surface W2of the wafer W is being roughly ground in the rough grinding unit80. A measurement result of the relative thickness measuring unit120is outputted from the calculator322to a controller340to be described later. The controller340monitors the relative thickness measured in the relative thickness measuring unit120and controls the rough grinding unit80such that the rough grinding is stopped when the relative thickness reaches a preset thickness. By using the relative thickness measuring unit120in this way, an end point (termination) of the rough grinding can be found. Further, as mentioned above, the relative thickness measuring unit120is provided in the fine grinding unit90and the gettering layer forming unit100as well. The relative thickness measuring unit120measures the relative thickness in each unit and is thus capable of finding an end point of the fine grinding and an end point of the polishing in the gettering layer formation. Further, the relative thickness measuring unit120of the present exemplary embodiment is capable of measuring the relative thickness without coming into contact with the wafer W and the chuck200. Here, if a contact type measurement device is used as in conventional cases, that is, if the relative thickness is measured in a state that the measurement device is in contact with the rear surface W2of the wafer W, the contact portion is rubbed to have a flaw. Further, depending on a device formed on the front surface of the wafer W, such a contact type measurement device may not be used. On this ground, the relative thickness measuring unit120of the present exemplary embodiment has advantages.
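The end point detection described above can be sketched as a simple monitoring rule: the relative thickness is the measured height of the rear surface W2minus the measured height of the reference surface of the chuck200, and grinding is stopped once it reaches the preset value. The function names and the numerical values in the following fragment are assumptions for illustration.

# Illustrative sketch (assumed names and values): compute the relative thickness
# from the two sensor readings and check whether the grinding end point has
# been reached.
def relative_thickness_um(rear_surface_height_um, chuck_reference_height_um):
    return rear_surface_height_um - chuck_reference_height_um

def grinding_end_point_reached(rear_surface_height_um, chuck_reference_height_um,
                               preset_relative_thickness_um):
    return relative_thickness_um(rear_surface_height_um,
                                 chuck_reference_height_um) <= preset_relative_thickness_um

# With the chuck reference surface measured at 10000 um and a preset relative
# thickness of 925 um (wafer plus protective tape), grinding continues at a
# rear-surface height of 10950 um and stops once it reaches 10925 um.
print(grinding_end_point_reached(10950.0, 10000.0, 925.0))   # False: keep grinding
print(grinding_end_point_reached(10925.0, 10000.0, 925.0))   # True: stop grinding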
Here, in the rough grinding unit80and the fine grinding unit90, the rear surface W2of the wafer W is ground while supplying water as a grinding liquid onto the rear surface W2. Accordingly, a water layer D is formed on the rear surface W2of the wafer W, as depicted inFIG.14. In such a case, when the laser light is irradiated to the rear surface W2from the second sensor321, the water layer D or air bubbles included in the water layer D may become noises, rendering it difficult to measure the position of the rear surface W2of the wafer W accurately. Thus, it is desirable to provide a nozzle323as a fluid supply at a bottom surface of the second sensor321. The nozzle323is configured to jet, for example, air A as a fluid along an optical path P of the laser light from the second sensor321to surround the optical path P. In such a case, the air A serves as a wall and blows the water layer D away, so that a spot (optical axis spot) of the rear surface W2where the laser light is irradiated to and reflected from can be set in a dry environment. Thus, without being affected by the water layer D, the position of the rear surface W2can be measured by the second sensor321more accurately. Furthermore, the fluid jetted from the nozzle323may not be limited to the air. For example, water, which is the same as the water layer D, may be used. In such a case, a column of water is formed from the second sensor321to the rear surface W2. Since a refractive index of the laser light does not change between the water jetted from the nozzle323and the water layer D, the position of the rear surface W2can be measured more accurately. (Cleaning Apparatus) The cleaning apparatus31shown inFIG.1is configured to clean the rear surface W2of the wafer W which is ground and polished by the processing apparatus30. To elaborate, while rotating the wafer W held by a spin chuck330, a cleaning liquid, for example, pure water is supplied onto the rear surface W2of the wafer W. The supplied cleaning liquid is diffused on the rear surface W2of the wafer W, so that the rear surface W2is cleaned. (Controller) The above-described substrate processing system1is equipped with the controller340as shown inFIG.1. The controller340is, for example, a computer and includes a program storage (not shown). A program for controlling a processing performed on the wafer W in the substrate processing system1is stored in the program storage. Further, the program storage also stores therein a program for implementing a wafer processing to be described later in the substrate processing system1by controlling the above-described various processing apparatuses and a driving system such as the transfer devices. Further, the programs may be recorded in a computer-readable recording medium H such as a hard disk (HD), a flexible disk (FD), a compact disk (CD), a magnet optical disk (MO) or a memory card, and may be installed from this recording medium H to the controller340. (Wafer Processing) Now, a wafer processing performed by using the substrate processing system1having the above-described configuration will be discussed.FIG.15is a flowchart illustrating an example of major processes of this wafer processing. First, a cassette C accommodating therein a plurality of wafers W is placed on the cassette placing table10of the carry-in/out station2. To suppress a deformation of the protective tape B, each wafer W is accommodated in the cassette C such that the front surface of the wafer W to which the protective tape B is attached faces upwards. 
Then, a wafer W is taken out of the cassette C and transferred into the processing apparatus30of the processing station3by the wafer transfer device22. At this time, the front surface and the rear surface of the wafer W are inverted by the transfer arm23such that the rear surface W2of the wafer W faces upwards. The wafer W transferred into the processing apparatus30is delivered onto the spin chuck261of the alignment unit60. Then, a direction of the wafer W in the horizontal direction is adjusted by the alignment unit60(process S1ofFIG.15). Subsequently, while the wafer W is being transferred by the transfer unit50, the thickness of the protective tape B is measured by the tape thickness measuring unit110(process S2ofFIG.15). Then, based on the measurement result of the thickness of the protective tape B, the inclination of the chuck200in the rough grinding unit80, the inclination of the chuck200in the fine grinding unit90and the inclination of the chuck200in the gettering layer forming unit100are individually adjusted by the adjusting device205(process S3ofFIG.15). By way of example, for the protective tape B shown inFIG.11AtoFIG.11D, by adjusting the inclination of the chuck200as shown inFIG.12AtoFIG.12D, the thickness of the wafer W after being ground and polished can be controlled to be uniform within the surface thereof. Meanwhile, in case that the protective tape B has, on the half-surface of the wafer W, a wave shape having a multiple number of protrusions and recesses, the thickness of the wafer W cannot be uniform just by adjusting the inclination of the chuck200. That is, only by adjusting how the grinding whetstones280and290and the polishing whetstone300come into contact with the rear surface W2of the wafer W, the thickness of the wafer W cannot be uniform. In such a case, a subsequent grinding and polishing processing upon this wafer W is stopped. Then, the wafer W is returned back into the cassette C of the cassette placing table10by the wafer transfer device22to be collected. Further, in the present exemplary embodiment, though the inclination of the chuck200of each of the rough grinding unit80, the fine grinding unit90and the gettering layer forming unit100is adjusted, the inclination of the chuck200of only the rough grinding unit80may be adjusted. Subsequently, the wafer W is delivered onto the chuck200at the first processing position P1by the transfer unit50. Thereafter, by rotating the turntable40by 90 degrees in the counterclockwise direction, the chuck200is moved to the second processing position P2. Then, the rear surface W2of the wafer W is roughly ground by the rough grinding unit80(process S4ofFIG.15). As the inclination of the chuck200is appropriately adjusted based on the measurement result of the tape thickness measuring unit110, the wafer W can be ground to have a uniform thickness within the surface thereof. Furthermore, since the end point of the rough grinding is investigated based on the measurement result of the relative thickness measuring unit120, the wafer W can be ground to have the appropriate thickness. Furthermore, a grinding amount by the rough grinding is set based on the thickness of the wafer W before being thinned and the required thickness of the wafer W after being thinned. Thereafter, the turntable40is further rotated by 90 degrees in the counterclockwise direction, and the chuck200is moved to the third processing position P3. Then, the rear surface W2of the wafer W is finely ground by the fine grinding unit90(process S5ofFIG.15).
At this time, since the inclination of the chuck200is appropriately adjusted based on the measurement result of the tape thickness measuring unit110, the wafer W can be ground to have a uniform thickness within the surface thereof. Furthermore, by finding the end point of the fine grinding based on the measurement result of the relative thickness measuring unit120, the wafer W can be ground to have the appropriate thickness. The wafer W is ground up to the thickness after being thinned, which is required as a product. Thereafter, by further rotating the turntable40by 90 degrees in the counterclockwise direction, the chuck200is moved to the fourth processing position P4. Then, while performing the stress relief processing, the gettering layer is formed on the rear surface W2of the wafer W by the gettering layer forming unit100(process S6ofFIG.15). At this time, since the inclination of the chuck200is appropriately adjusted based on the measurement result of the tape thickness measuring unit110, the wafer W can be polished to have a uniform thickness within the surface thereof. Furthermore, since the end point of the polishing is investigated based on the measurement result of the relative thickness measuring unit120, the wafer W can be polished to have the appropriate thickness. Afterwards, by further rotating the turntable40by 90 degrees in the counterclockwise direction or 270 degrees in the clockwise direction, the chuck200is moved to the first processing position P1. Then, the rear surface W2of the wafer W is cleaned by the cleaning liquid in the cleaning unit70(process S7ofFIG.15). Subsequently, the wafer W is transferred into the cleaning apparatus31by the wafer transfer device22. In the cleaning apparatus31, the rear surface W2of the wafer W is cleaned by the cleaning liquid (process S8ofFIG.15). The cleaning of the rear surface W2of the wafer W is also performed in the cleaning unit70of the processing apparatus30. In the cleaning unit70, however, a rotation speed of the wafer W is low, and the cleaning is performed to remove contaminants only to some degree, for example, to the extent that the transfer arm23of the wafer transfer device22is not contaminated. Meanwhile, in the cleaning apparatus31, the rear surface W2of this wafer W is further cleaned to a required degree of cleanness. Then, the wafer W after being subjected to all the required processings is transferred back into the cassette C on the cassette placing table10by the wafer transfer device22. Then, a series of the wafer processings in the substrate processing system1is ended. According to the present exemplary embodiment as described above, since the thickness of the protective tape B is measured in the tape thickness measuring unit110before the rear surface W2of the wafer W is roughly ground in the rough grinding unit80, the inclination of the chuck200is adjusted based on this measurement result, and the way how the grinding whetstones280and290and the polishing whetstone300come into contact with the rear surface W2of the wafer W can be adjusted. Thus, even if the thickness of the protective tape B is non-uniform within the surface thereof, the thickness of the wafer W after being subject to the grinding and the polishing can be uniform within the surface thereof. 
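The adjustment described above is a feedforward correction: the tape thickness is measured first, and the chuck inclination is then set so that the whetstones remove a uniform amount despite the non-uniform tape. The following Python fragment is only an illustrative sketch of that idea, not the control program of the substrate processing system 1; the measurement points, the assumption that the tape thickness varies linearly across the wafer, and the function names are all ours.

import numpy as np

def fit_tape_plane(samples):
    # Least-squares fit of t(x, y) = a*x + b*y + c to (x_mm, y_mm, thickness_um) samples
    # of the protective tape B. Returns the slopes a, b (um per mm) and the offset c (um).
    design = np.array([[x, y, 1.0] for x, y, _ in samples])
    thickness = np.array([t for _, _, t in samples])
    (a, b, c), *_ = np.linalg.lstsq(design, thickness, rcond=None)
    return a, b, c

# Hypothetical measurement points (x mm, y mm, tape thickness um); not values from the embodiment.
samples = [(-100.0, 0.0, 24.0), (100.0, 0.0, 26.0), (0.0, -100.0, 25.0), (0.0, 100.0, 25.0)]
a, b, _ = fit_tape_plane(samples)

# A wedge-shaped tape tilts the rear surface W2 by the same slope, so the chuck is tilted
# by the opposite angle before grinding; the print stands in for the command actually sent
# to the adjusting device.
tilt_x_deg = -np.degrees(np.arctan(a / 1000.0))  # um/mm converted to a dimensionless slope
tilt_y_deg = -np.degrees(np.arctan(b / 1000.0))
print(f"chuck tilt command: {tilt_x_deg:.5f} deg about y, {tilt_y_deg:.5f} deg about x")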
Conventionally, to make the thickness of the wafer W uniform, a feedback control has been performed in which the thickness of the wafer W is actually measured after the fine grinding and a processing condition for the grinding processing is corrected based on the measurement result. In such a case, however, the grinding whetstone290needs to be separated from the fine grinding unit90, and a sensor for measuring the thickness needs to be installed. Thus, it takes time to complete the grinding processing. In the present exemplary embodiment, however, since the thickness of the protective tape B is measured and the inclination of the chuck200is adjusted before the grinding processing, the time for the grinding processing can be shortened. Accordingly, the throughput of the wafer processing can be improved. Furthermore, even if the inclination of the chuck200is adjusted as stated above, the rotation shaft210at the driven side and the driving unit220at the driving side are operated independently because the hollow portion236is formed in the driven pulley231of the rotating device204. That is, though the rotational driving force by the driving unit220is appropriately delivered to the rotation shaft210, the inclination of the chuck200(inclination of the rotation shaft210) is not delivered to the driving unit220. Therefore, the chuck200can be rotated appropriately. Further, since the relative thickness is individually measured by the relative thickness measuring unit120during the rough grinding in the rough grinding unit80, during the fine grinding in the fine grinding unit90and during the polishing in the gettering layer forming unit100, the end points of the rough grinding, the fine grinding and the polishing can be found. Therefore, the wafer W can be ground and polished to have the appropriate thickness. Moreover, according to the above-described exemplary embodiment, the rough grinding of the rear surface of the wafer W in the rough grinding unit80, the fine grinding of the rear surface of the wafer W in the fine grinding unit90, the formation of the gettering layer in the gettering layer forming unit100, and the cleaning of the rear surface of the wafer W in the cleaning unit70and the cleaning apparatus31can be performed on a plurality of wafers W continuously in the single substrate processing system1. Therefore, the wafer processing can be performed efficiently within the single substrate processing system1, so that the throughput can be improved. <Other Examples of Tape Thickness Measuring Unit> Now, other examples of the tape thickness measuring unit110will be described. The tape thickness measuring unit110can be placed at any position as long as it is capable of performing the thickness measurement before the rear surface W2of the wafer W is roughly ground in the rough grinding unit80. That is, the tape thickness measuring unit110can be placed at any position in a wafer transfer path ranging from the carry-in/out station2to the rough grinding unit80. First Modification Example As depicted inFIG.16, the tape thickness measuring unit110may be provided in the alignment unit60. This tape thickness measuring unit110is equipped with a first sensor400, a second sensor401and a calculator402. The first sensor400may be, by way of example, a laser displacement meter. The first sensor400is configured to measure a position (height) of a front surface of the base260. This front surface of the base260is regarded as a reference surface. The second sensor401may also be, for example, a laser displacement meter.
The second sensor401is configured to measure a position (height) of the rear surface W2of the wafer W. Further, in the present exemplary embodiment, though the laser displacement meters are used as the first sensor400and the second sensor401, the present exemplary embodiment is not limited thereto, and any of various kinds of measurement devices capable of measuring a position of a measurement target in a non-contact manner can be used. The calculator402calculates the thickness of the protective tape B by subtracting, from the position of the rear surface W2of the wafer W measured by the second sensor401, the position of the front surface of the base260measured by the first sensor400and a sum of a previously investigated thickness of the wafer W and a distance between a front surface of the spin chuck261and the front surface of the base260. Further, in the present first modification example, the second sensor401may measure a position of the rear surface W2of the wafer W after measuring a position of the front surface of the spin chuck261, for example. In the calculator402, the relative thickness is calculated by subtracting the position of the front surface of the spin chuck261from the position of the rear surface W2of the wafer W, and the thickness of the protective tape B is calculated by subtracting the previously investigated thickness of the wafer W from the relative thickness. In such a case, however, it is desirable that the spin chuck261has the same size as the wafer W when viewed from the top. Moreover, though the thickness of the protective tape B is measured by using the first sensor400, the second sensor401and the calculator402in the first modification example, the thickness of the protective tape B may be directly measured by using, for example, a spectral interferometer. In such a case, light having a wavelength range capable of penetrating the wafer W, for example, infrared light is used. Second Modification Example The tape thickness measuring unit110may be provided at an outside of the processing apparatus30. In such a configuration, the tape thickness measuring unit110is connected to, for example, the wafer transfer area20. As depicted inFIG.17, the tape thickness measuring unit110has a placing table410configured to place and hold the wafer W thereon. In case that the wafer W is held by the placing table410, a spectral interferometer equipped with a sensor411and a calculator412is used as the tape thickness measuring unit110. The sensor411and the calculator412have the same configurations as the sensor310and the calculator311of the above-described exemplary embodiment. Further, in case that the protective tape B is held by the placing table410(in case that the front surface and the rear surface of the wafer W are reverse to the shown example), sensors (not shown) and a calculator (not shown) having the same configurations as the first and second sensors400and401and the calculator402of the above-described exemplary embodiment are used in the tape thickness measuring unit110. In any cases, the thickness of the protective tape B can be measured in the tape thickness measuring unit110. In any of the above-described first and second modification examples, since the thickness of the protective tape B can be measured before the rough grinding in the rough grinding unit80is performed, the same effects as obtained in the above-described exemplary embodiment can be achieved. 
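The computation attributed to the calculator 402 in the first modification example is a single subtraction. The short sketch below merely restates it with assumed units (micrometres) and hypothetical variable names; it is not code from the embodiment.

def tape_thickness_um(rear_surface_um, base_surface_um, wafer_thickness_um, chuck_to_base_um):
    # Thickness of the protective tape B as described for the calculator 402:
    # (position of the rear surface W2) - (position of the front surface of the base 260)
    # - ((previously investigated wafer thickness) + (spin chuck 261 surface to base 260 surface)).
    return rear_surface_um - base_surface_um - (wafer_thickness_um + chuck_to_base_um)

# Illustrative numbers only (all heights in micrometres relative to an arbitrary reference;
# none of these values come from the embodiment): a 775 um wafer on a chuck whose surface
# sits 10 160 um above the base, with the rear surface measured at 10 960 um, gives a 25 um tape.
print(tape_thickness_um(10960.0, 0.0, 775.0, 10160.0))  # 25.0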
<Other Examples of Chuck Rotating Device> In the above-described exemplary embodiment shown inFIG.4AandFIG.4B, the hollow portion236is formed between the inner driven pulley231aand the outer driven pulley231bin the driving force transmitter230of the rotating device204. However, as illustrated inFIG.18AandFIG.18B, a flexible member420may be filled between the inner driven pulley231aand the outer driven pulley231binstead of the hollow portion236. In such a configuration, the inner magnet234and the outer magnet235are omitted. The flexible member420is not particularly limited as long as it delivers the rotational driving force by the driving unit220to the rotation shaft210but does not deliver the inclination of the rotation shaft210to the driving unit220. By way of non-limiting example, a plurality of pins having flexibility may be used as the flexible member420, or a diaphragm (membrane) may be used as the flexible member420and this diaphragm may be transformed. By using the flexible member420, the same effects as obtained in the above-described exemplary embodiments can be achieved. Moreover, in the rotating device204according to the above-described exemplary embodiments, the rotation shaft210and the driving unit220are provided independently. However, a driving unit (not shown) of a direct drive type, for example, may be provided at the supporting table211of the rotation shaft210. <Other Examples of Chuck Adjusting Device> In the above-described exemplary embodiment shown inFIG.3, the inclination of the chuck200is adjusted by the adjusting device205. However, inclinations of the grinding whetstones280and290and the polishing whetstone300may be adjusted instead, or both the inclination of the chuck200and the inclinations of the grinding whetstones280and290and the polishing whetstone300may be adjusted. <Other Exemplary Embodiments of Substrate Processing System> In the substrate processing system1according to the above-described exemplary embodiment shown inFIG.1, the gettering layer forming unit100is provided within the processing apparatus30. However, a gettering layer forming apparatus (not shown) having the same configuration as the gettering layer forming unit100may be independently provided at the outside of the processing apparatus30. In such a case, the processing apparatus30may include the rough grinding unit80, a medium grinding unit (not shown) and the fine grinding unit90. From the foregoing, it will be appreciated that various embodiments of the present disclosure have been described herein for purposes of illustration, and that various modifications may be made without departing from the scope and spirit of the present disclosure. Accordingly, the various embodiments disclosed herein are not intended to be limiting. The scope of the inventive concept is defined by the following claims and their equivalents rather than by the detailed description of the exemplary embodiments. It shall be understood that all modifications and embodiments conceived from the meaning and scope of the claims and their equivalents are included in the scope of the inventive concept. EXPLANATION OF CODES According to the exemplary embodiments, the grinder can be brought into contact with the substrate appropriately even if the thickness of the protective tape is non-uniform within the surface thereof. Therefore, the substrate can be thinned to have the uniform thickness within the surface thereof. | 47,835 |
11858093 | DETAILED DESCRIPTION OF EMBODIMENTS The technical solutions of the present application will be further described in detail below with reference to the embodiments, but the protection scope of the present application is not limited thereto. Ongoing research to improve the quality of epitaxial processing of silicon carbide material in semiconductor applications is believed to yield advancements in high power, high voltage or high temperature semiconductor applications, among others. However, the hardness of silicon carbide material is second only to diamond, rendering the processing difficult. At present, the finishing of silicon carbide epitaxial wafers mainly adopts a free grinding and polishing process, which has disadvantages such as low processing efficiency, low grinding profile accuracy, high cost, poor control of product quality stability, and insufficient environmental friendliness. Aspects of various embodiments disclosed herein address these and other challenges. In the following examples, the raw materials involved are commercially-available products or can be prepared by referring to techniques available in the art. Specifically, an example grain size of diamond abrasive can be in a range from about 10000 #-15000 #, and an example grain size of ordinary abrasive is (about) 10000 #, which are commercially-available products. An example resin bonding agent can be cashew-nut-oil-modified phenolic resin powder with a grain size of 40-60 μm, which is available for purchase from Tongcheng New Material. An example hexagonal boron nitride can be a water-soluble nanosheet of hexagonal boron nitride, and the grain size of its lateral size can be several hundred nanometers, which can be prepared by referring to a published Master's Thesis “Preparation of Water-soluble Hexagonal Boron Nitride Nanosheet and Its Application in Composite Materials” in Shantou University, China. An example ceramic powder can be foam ceramic, and the diameter of the ceramic powder can be 60-70 μm. The boron powder can have a grain size of 3 μm or about 3 μm. The pre-alloy powder bonding agent can be, for example, Bi-30Pb-15Sn-9Cd with a grain size of 200-300 μm, which is a commercially-available product. EXAMPLE 1 A composite binding agent grinding wheel is composed of a matrix and an abrasive layer. The weight percentage of each raw material of the abrasive layer is: 45% of pretreatment abrasive, 20% of resin bonding agent, 10% of prealloy powder bonding agent, 7% of hexagonal boron nitride, 10% of silicon dioxide, 5% of ceramic powder, and 3% of boron powder. The pretreatment abrasive is composed of the following raw materials in weight percentage: 57% of diamond abrasive, 19% of ordinary abrasive (white corundum), 22% of PES, and 2% of titanate coupling agent. The pretreatment abrasive is prepared by the following pretreatment process:1) dissolving PES in DMF, mechanically stirring and mixing the resultant uniformly (which can be heated to 60° C. and put in the oven for 1 h to make it dissolved completely), to prepare a PES/DMF solution with a mass concentration of 20%;2) adding the titanate coupling agent to the DMF in a mass ratio of 1:100, and mixing the resultant uniformly by high-frequency vibration for 10 min;3) adding the diamond abrasive and the ordinary abrasive, at a mass ratio of 3:1, to the mixed solution obtained in 2), and mixing the resultant uniformly by high-frequency vibration and mechanically mixing for 1 h;4) baking the product obtained in step 3) in an oven at 80° C. 
for 3 h, until the weight loss mass ratio of DMF in the solution is 50%;5) adding the mixed solution obtained in step 4) to the PES/DMF solution obtained in step 1), and mixing the resultant uniformly by ultrasonic and mechanical stirring for 3 h, to obtain a pretreatment mixed solution;6) putting the pretreatment mixed solution obtained in step 5) into an industrial-grade plastic syringe with a diameter of 25.3 mm and a height of 170 mm, which is connected with a metal needle (nozzle diameter: 2 mm); and then applying a voltage of 50 kV to make the pretreatment mixed solution electrosprayed from the metal nozzle of the plastic syringe into a container containing pure water at an injection speed of 50 mm/min, wherein the mixed abrasive wrapped by PES precipitates out of the water; and7) drying the product obtained in step 6) to obtain a spherical pretreatment abrasive with a grain size about 50 μm. The method for preparing the above composite binding agent grinding wheel specifically includes the following steps:a) putting the pretreatment abrasive, resin bonding agent, prealloy powder bonding agent and ceramic powder into an ultrasonic vibrating screen of 200 mesh, mixing the resultant for 30 min to be uniform for later use;b) putting boron powder, hexagonal boron nitride and silicon oxide into anhydrous ethanol at a solid-to-liquid ratio (g/g) of 1:50, and mixing the resultant uniformly by ultrasonic and mechanical stirring for 2 h, then putting the resultant into a vacuum oven with a vacuum degree of −0.05 MPa and a temperature of 60° C. and baking the same for about 2 h to obtain a non-agglomerated mixture with an ethanol content of 5% by mass, and placing this mixture in a high-frequency vibrator to vibrate and mix the same for 2 min, then drying the resultant in a vacuum oven at 60° C. for later use;c) mixing the mixed materials prepared in step a) and step b), and then mixing the resultant for 1 h by putting the resultant into an ultrasonic vibrating screen of 150 mesh to obtain a uniformly mixed molding material;d) feeding the molding material into the assembled mold, and heating the molding material to 160° C. using microwave heating method within 2 min, then moving the mold to a vacuum press with a temperature of 160° C., applying a pressure of 120 MPa to vacuumize to −0.04-0.08 MPa, raising the temperature of the press to 240° C., maintaining the temperature for 8 h, taking out the mold to cool to room temperature and unloading the mold to obtain the grinding wheel block;e) processing the grinding wheel block into a diamond-shape structure (a quadrangular diamond shape with an acute angle of 60°, seeFIG.2), and then bonding the resultant to the microporous copper matrix (seeFIG.1). In this application, the microporous copper matrix with a pore diameter of 50-260 μm produced by Xinxiang Ruitong Filter Equipment Manufacturing Co., Ltd. is used. EXAMPLE 2 A composite binding agent grinding wheel is composed of a matrix and an abrasive layer. The weight percentage of each raw material of the abrasive layer is: 62% of pretreatment abrasive, 8% of resin bonding agent, 8% of prealloy powder bonding agent, 5% of hexagonal boron nitride, 8% of silicon dioxide, 8% of ceramic powder, and 1% of boron powder. Reference can be made to Example 1 for the raw material ratio and preparation method of the pretreatment abrasive. 
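Since the abrasive layer in each example is specified purely by weight percentages, the raw-material masses for a batch follow from one multiplication per ingredient, as shown in the sketch below. It is a convenience calculation only; the dictionary keys repeat the Example 2 formulation given above, while the 1 kg batch size and the function name are assumptions.

EXAMPLE_2_ABRASIVE_LAYER_WT_PCT = {
    "pretreatment abrasive": 62.0,
    "resin bonding agent": 8.0,
    "prealloy powder bonding agent": 8.0,
    "hexagonal boron nitride": 5.0,
    "silicon dioxide": 8.0,
    "ceramic powder": 8.0,
    "boron powder": 1.0,
}

def batch_masses_g(wt_pct, batch_mass_g=1000.0):
    # Convert a weight-percent formulation into raw-material masses for one batch,
    # checking first that the percentages add up to 100.
    total = sum(wt_pct.values())
    if abs(total - 100.0) > 1e-6:
        raise ValueError(f"formulation sums to {total} %, expected 100 %")
    return {name: batch_mass_g * pct / 100.0 for name, pct in wt_pct.items()}

for name, grams in batch_masses_g(EXAMPLE_2_ABRASIVE_LAYER_WT_PCT).items():
    print(f"{name}: {grams:.0f} g")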
The method for preparing the above composite binding agent grinding wheel specifically includes the following steps:a) putting the pretreatment abrasive, resin bonding agent, prealloy powder bonding agent and ceramic powder into a ultrasonic vibrating screen of 200 mesh, mixing the resultant for 30 min to be uniform for later use;b) putting boron powder, hexagonal boron nitride and silicon oxide into anhydrous ethanol at a solid-to-liquid ratio (g/g) of 1:50, and mixing the resultant uniformly by ultrasonic and mechanical stirring for 2 h, then putting the resultant into a vacuum oven with a vacuum degree of −0.04 MPa and a temperature of 60° C. and baking the same for about 2 h to obtain a non-agglomerated mixture with an ethanol content of 5% by mass, and placing this mixture in a high-frequency vibrator to vibrate and mix the same for 2 min, then drying the resultant in a vacuum oven at 60° C. for later use;c) mixing the mixed materials prepared in step a) and step b), and then mixing the resultant for 1 h by putting the resultant into an ultrasonic vibrating screen of 150 mesh to obtain a uniformly mixed molding material;d) feeding the molding material into the assembled mold, and heating the molding material to 170° C. using microwave heating method within 2 min, then moving the mold to a vacuum press with a temperature of 160° C., applying a pressure of 120 MPa to vacuumize to −0.04-0.08 MPa, raising the temperature of the press to 280° C., maintaining the temperature for 8 h, taking out the mold to cool to room temperature and unloading the mold to obtain the grinding wheel block;e) processing the grinding wheel block into a diamond-shape structure (a quadrangular diamond shape with an acute angle of 60° and then bonding the resultant to the microporous copper matrix. In this application, the microporous copper matrix with a pore diameter of 50-260 μm produced by Xinxiang Ruitong Filter Equipment Manufacturing Co., Ltd. is used. EXAMPLE 3 A composite binding agent grinding wheel is composed of a matrix and an abrasive layer. The weight percentage of each raw material of the abrasive layer is: 50% of pretreatment abrasive, 12% of resin bonding agent, 12% of prealloy powder bonding agent, 6% of hexagonal boron nitride, 5% of silicon dioxide, 13% of ceramic powder, and 2% of boron powder. Reference can be made to Example 1 for the raw material ratio and preparation method of the pretreatment abrasive. The method for preparing the above composite binding agent grinding wheel specifically includes the following steps:a) putting the pretreatment abrasive, resin bonding agent, prealloy powder bonding agent and ceramic powder into a ultrasonic vibrating screen of 200 mesh, mixing the resultant for 30 min to be uniform for later use;b) putting boron powder, hexagonal boron nitride and silicon oxide into anhydrous ethanol at a solid-to-liquid ratio (g/g) of 1:50, and mixing the resultant uniformly by ultrasonic and mechanical stirring for 2 h, then putting the resultant into a vacuum oven with a vacuum degree of −0.06 MPa and a temperature of 60° C. and baking the same for about 2 h to obtain a non-agglomerated mixture with an ethanol content of 5% by mass, and placing this mixture in a high-frequency vibrator to vibrate and mix the same for 2 min, then drying the resultant in a vacuum oven at 60° C. 
for later use;c) mixing the mixed materials prepared in step a) and step b), and then mixing the resultant for 1 h by putting the resultant into an ultrasonic vibrating screen of 150 mesh to obtain a uniformly mixed molding material;d) feeding the molding material into the assembled mold, and using microwave heating to heat the molding material to 150° C. within 2 min, then moving the mold to a vacuum press with a temperature of 160° C., applying a pressure of 120 MPa to vacuumize to −0.04-0.08 MPa, raising the temperature of the press to 260° C., maintaining the temperature for 8 h, taking out the mold to cool to room temperature and unloading the mold to obtain the grinding wheel block;e) processing the grinding wheel block into a diamond-shape structure (a quadrangular diamond shape with an acute angle of 60° and then bonding the resultant to the microporous copper matrix. In this application, the microporous copper matrix with a pore diameter of 50-260 μm produced by Xinxiang Ruitong Filter Equipment Manufacturing Co., Ltd. is used. EXAMPLE 4 A composite binding agent grinding wheel is composed of a matrix and an abrasive layer. The weight percentage of each raw material of the abrasive layer is: 52% of pretreatment abrasive, 13% of resin bonding agent, 6% of alloy powder, 12% of hexagonal boron nitride, 7% of silicon dioxide, 8% of ceramic powder, and 2% of boron powder. Reference can be made to Example 1 for the raw material ratio and preparation method of the pretreatment abrasive. Reference can be made to Example 1 for the preparation method of the above composite binding agent grinding wheel. COMPARATIVE EXAMPLE 1 The pretreatment abrasive in Example 1 is changed to an abrasive that has not undergone pretreatment (that is, the abrasive formula remains unchanged, but the pretreatment process is not proceeded), and the rest refers to Example 1 to prepare the grinding wheel. COMPARATIVE EXAMPLE 2 Conventional grinding wheel formula, the weight percentage of each composition of the raw material thereof is: 45% of diamond abrasive, 25% of phenolic resin powder, 20% of silicon carbide, 3% of chromic oxide, and 7% of white corundum. With this formula, the grinding wheel is prepared by the conventional hot pressing method. COMPARATIVE EXAMPLE 3 The hexagonal boron nitride in Example 3 is changed to graphite; and the structure of the grinding wheel block is changed to a common arc structure to prepare a grinding wheel. Grinding Test When the grinding and polishing liquid is used for processing, that is, titanium dioxide and cerium oxide are used as the grinding and polishing liquid for the abrasives to process the four-inch silicon carbide epitaxial wafer, the material removal rate is less than 0.3 μm/h. Polishing of 3 μm takes 10 h, and the efficiency is extremely low. It requires more than 20 L of polishing liquid. The surface roughness of the workpiece is Ra=0.3 nm, wherein TTV<4 μm. Compared with the grinding wheel of the present application, the grinding efficiency is low, the profile accuracy is poor, the pollution is large, and the cost is high. The following table shows the grinding effects of the grinding wheels prepared in Examples 1 to 4 and Comparative Examples 1 to 3. It can be seen from Table 1 that compared with the comparative examples, the grinding wheel of the present application has higher grinding efficiency, better profile accuracy, less pollution, and lower cost. 
The grinding wheels prepared in the comparative examples, by contrast, often show lower processing efficiency and substandard grinding surface quality, cannot work continuously, and leave coarse grinding lines and poor profile accuracy on the workpiece surface while needing frequent repairs.

TABLE 1. Grinding comparison results of grinding wheels prepared in different examples and comparative examples

Example 1
Processing condition: 4-inch silicon carbide epitaxial wafer; machine tool: Korea AM; rotational speed of 1500 rpm; grinding for 5 μm; single feed of 0.03 μm/s.
Processing effect: material removal rate of 108 μm/h, processing time of 0.046 h, Ra = 0.28 nm, TTV < 3 μm; it can perform processing continuously without repairing and maintaining.
Comparison of results (Examples 1 to 4): higher efficiency, better profile accuracy, lower pollution and lower cost.

Example 2
Processing condition: 4-inch silicon carbide epitaxial wafer; machine tool: Korea AM; rotational speed of 1650 rpm; grinding for 4 μm; single feed of 0.025 μm/s.
Processing effect: material removal rate of 90 μm/h, processing time of 0.044 h, Ra = 0.25 nm, TTV < 2.6 μm; it can perform processing continuously without repairing and maintaining.

Example 3
Processing condition: 4-inch silicon carbide epitaxial wafer; machine tool: Korea AM; rotational speed of 1900 rpm; grinding for 3 μm; single feed of 0.015 μm/s.
Processing effect: material removal rate of 54 μm/h, processing time of 0.056 h, Ra = 0.22 nm, TTV < 1.8 μm; it can perform processing continuously without repairing and maintaining.

Example 4
Processing condition: 4-inch silicon carbide epitaxial wafer; machine tool: Korea AM; rotational speed of 1800 rpm; grinding for 4 μm; single feed of 0.015 μm/s.
Processing effect: material removal rate of 54 μm/h, processing time of 0.074 h, Ra = 0.19 nm, TTV < 2 μm; it can perform processing continuously without repairing and maintaining.

Comparative Example 1
Processing condition: 4-inch silicon carbide epitaxial wafer; machine tool: Korea AM; rotational speed of 1500 rpm; grinding for 5 μm; single feed of 0.03 μm/s.
Processing effect: the surface of the workpiece is burnt when the single feed of the grinding wheel is 0.03 μm/s; when the feed rate is reduced to 0.01 μm/s, the grinding wheel needs to be repaired and maintained after grinding 2 pieces.
Comparison of results: the processing efficiency is low, the grinding quality is not up to standard, and continuous work cannot be performed.

Comparative Example 2
Processing condition: 4-inch silicon carbide epitaxial wafer; machine tool: Korea AM; rotational speed of 1650 rpm; grinding for 4 μm.
Processing effect: the work cannot be performed even when the grinding feed rate is adjusted; the surface of the ground workpiece is burnt and cracked, which cannot meet the grinding requirements.
Comparison of results: the work cannot be performed.

Comparative Example 3
Processing condition: 4-inch silicon carbide epitaxial wafer; machine tool: Korea AM; rotational speed of 1900 rpm; grinding for 3 μm; single feed of 0.015 μm/s.
Processing effect: after grinding 4 pieces, the grinding wheel needs to be repaired and maintained, and the surface roughness of the workpiece reaches Ra = 0.5 nm, TTV < 3.6 μm.
Comparison of results: the surface of the workpiece has coarse grinding lines and poor profile accuracy, and frequent repairing and maintaining is required.

In summary, it can be concluded that the resin super-hard grinding wheel prepared by the present application can achieve nano-level grinding surface quality when grinding epitaxial wafers, and the grinding wheel has strong self-sharpening and high sharpness. It has obvious advantages in the finishing of silicon carbide epitaxial wafers in back thinning processing, which can solve the current problem of back processing of silicon carbide epitaxial wafers.
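The processing times in Table 1 are simply the grinding depth divided by the material removal rate, which also makes the contrast with the free grinding and polishing process quantitative. The lines below are just that arithmetic, using values quoted in the grinding test and in Table 1; the function name is ours.

def processing_time_h(grinding_depth_um, removal_rate_um_per_h):
    # Time to remove a given depth at a given material removal rate.
    return grinding_depth_um / removal_rate_um_per_h

print(processing_time_h(5.0, 108.0))  # Example 1: about 0.046 h
print(processing_time_h(4.0, 90.0))   # Example 2: about 0.044 h
print(processing_time_h(3.0, 54.0))   # Example 3: about 0.056 h
print(processing_time_h(3.0, 0.3))    # free grinding and polishing liquid: about 10 h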
As utilized herein, relative terms or terms of degree such as approximately, substantially or like relative terms such as about, roughly and so forth, are intended to incorporate ranges and variations about a qualified term reasonably encountered by one of ordinary skill in the art in fabricating or compiling the embodiments disclosed herein, where not explicitly specified otherwise. For instance, a relative term can refer to ranges of manufacturing tolerances associated with suitable manufacturing equipment (e.g., injection molding equipment, extrusion equipment, solution mixing equipment, precipitation equipment, solution baking or drying equipment, and so forth) for realizing a mixture, solution, structure, apparatus or the like from a disclosed illustration or description. In some embodiments, depending on context and the capabilities of one of ordinary skill in the art, relative terminology can refer to a variation in a disclosed quantity, range of quantities or a disclosed characteristic; e.g., a 0 to 2-percent variance, a 0 to 3-percent variance, a 0 to five-percent variance or a zero to ten-percent variance from precise mathematically defined value or characteristic, or any suitable value or range there between based on suitable fabrication equipment and accuracy thereof, can define a scope for a disclosed term of degree. These or similar variances can be applicable to other contexts in which a term of degree is utilized herein such as timing of a computer-controlled signal (e.g., in mixing, heating or extraction process), accuracy of measurement of a physical effect (e.g., a temperature of solution or solute, a mass weight, a relative mass ratio, etc.) or the like. In regard to the various functions performed by the above described components, machines, apparatuses, devices, processes, control operations and the like, the terms (including a reference to a “means”) used to describe such components, etc., are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., a functional equivalent), even though not structurally equivalent to the disclosed structure, which performs the function in the herein illustrated exemplary aspects of the embodiments. In this regard, it will also be recognized that the embodiments include a system as well as mechanical structures, mechanical drives, electronic or electro-mechanical drive controllers, and electronic hardware configured to implement the functions, or a computer-readable medium having computer-executable instructions for performing the acts or events of the various processes or control operations described herein. In addition, while a particular feature may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Furthermore, to the extent that the terms “includes,” and “including” and variants thereof are used in either the detailed description or the claims, these terms are intended to be inclusive in a manner similar to the term “comprising.” As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X employs A or B” is intended to mean any of the natural inclusive permutations. 
That is, if X employs A; X employs B; or X employs both A and B, then “X employs A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. In other embodiments, combinations or sub-combinations of the above disclosed embodiments can be advantageously made. Moreover, embodiments described in a particular drawing or group of drawings should not be construed as being limited to those illustrations. Rather, any suitable combination or subset of elements from one drawing(s) can be applied to other embodiments in other drawings where suitable to one of ordinary skill in the art to accomplish objectives disclosed herein, objectives known in the art, or objectives and operation reasonably conveyed to one of ordinary skill in the art by way of the context provided in this specification. Where utilized, block diagrams of the disclosed embodiments or flow charts are grouped for ease of understanding. However, it should be understood that combinations of blocks, additions of new blocks, re-arrangement of blocks, and the like are contemplated in alternative embodiments of the present disclosure. Based on the foregoing it should be understood that the examples and embodiments described herein are for illustrative purposes only and that various modifications or changes in light thereof will be suggested to persons skilled in the art and are to be included within the spirit and purview of this application and scope of the appended claims. | 22,304 |
11858094 | DETAILED DESCRIPTION Embodiments of the present disclosure will now be described with reference to the drawings. FIG.1is a side view of a rechargeable impact driver as an example of an impact tool.FIG.2is a longitudinal central sectional view of the impact driver. An impact driver1includes a body2and a grip3. The body2includes a central axis extending in the front-rear direction. The grip3protrudes downward from the body2. The impact driver1includes a housing including a body housing4, a rear cover5, and a hammer case6. The body housing4includes a motor housing7, a grip housing8, and a battery mount9. The motor housing7is cylindrical and defines a rear portion of the body2. The grip housing8defines the grip3. The battery mount9receives a battery pack10, which serves as a power supply. The body housing4and the rear cover5are formed from resin. The body housing4includes left- and right-half housings4aand4b. The left- and right-half housings4aand4bare joined together with multiple screws11placed from the right. The rear cover5is a cap. The rear cover5is joined to the motor housing7from the rear with two screws, or right and left screws. The hammer case6is formed from metal. The hammer case6is joined to a front portion of the motor housing7. The hammer case6defines a front portion of the body2. Lamps (not shown) for illuminating ahead are located on the right and left of the hammer case6between the hammer case6and the motor housing7. The body2accommodates, from the rear, a brushless motor12, a reduction assembly13, a spindle14, and a striking assembly15. The brushless motor12is accommodated in the motor housing7and the rear cover5. The reduction assembly13, the spindle14, and the striking assembly15are accommodated in the hammer case6. The striking assembly15includes an anvil16. The anvil16has a front end protruding frontward from the hammer case6. The grip3accommodates a switch17in its upper portion. A trigger18protrudes in front of the switch17. A forward-reverse switch lever19for the brushless motor12is located between the hammer case6and the switch17. A mode switch20is located in front of the forward-reverse switch lever19. The mode switch20faces frontward and has a button exposed on the front surface. The button is repeatedly pressed to switch impact forces or registered striking modes. The battery mount9accommodates a terminal base21and a controller22. The terminal base21is electrically connected to multiple battery cells encased in the battery pack10. The controller22is located above the terminal base21. The controller22includes a control circuit board23receiving, for example, a microcomputer and switching elements. A display panel24is located on the upper surface of the battery mount9. The display panel24is electrically connected to the control circuit board23. The display panel24displays the rotational speed of the brushless motor12and the remaining battery level of the battery pack10. The display panel24also allows other operations including switching the on-off state of the lamps. The brushless motor12is an inner-rotor motor including a stator25and a rotor26. The stator25includes a stator core27, insulators28, and coils29. The insulators28are on the front and the rear of the stator core27. The coils29are wound around the stator core27with the insulators28in between. The front insulator28receives a sensor circuit board30. The sensor circuit board30includes three rotation detectors (not shown). 
The three rotation detectors detect the position of a sensor permanent magnet34in the rotor26and output rotation detection signals. The rotor26includes a rotational shaft31, a cylindrical rotor core32, a permanent magnet33, and the sensor permanent magnet34. The rotational shaft31is aligned with the axis of the rotor26and extends in the front-rear direction. The permanent magnet33is cylindrical and surrounds the rotor core32. The sensor permanent magnet34is in front of the rotor core32. The rear cover5holds a bearing35in the center portion of its rear inner surface. The bearing35axially supports the rear end of the rotational shaft31. The rotational shaft31receives a fan36for cooling the motor in front of the bearing35. The rear cover5has multiple outlets37in its circumferential surface outward from the fan36. The motor housing7has multiple inlets38in its right and left side surfaces in front of the outlets37. A bearing box40is held in front of the brushless motor12in the motor housing7. The bearing box40is a disk having a stepped shape with a center portion protruding rearward. The motor housing7includes an engagement rib41on its inner surface. The engagement rib41is engaged with the bearing box40. The bearing box40receives the rotational shaft31through its center. The bearing box40holds a bearing42in its rear portion. The bearing42supports the rotational shaft31. The rotational shaft31receives a pinion43at its front end. As shown inFIG.3, the bearing box40includes an inner wall44on its outer circumference. The inner wall44is annular and extends frontward. The inner wall44has a thread on its outer circumferential surface. The hammer case6has an internal thread on its inner circumference at the rear. The inner wall44is screwed to the hammer case6. The hammer case6includes a projection45on its lower surface. The projection45is held between the left- and right-half housings4aand4b. The hammer case6is thus locked in a nonrotatable manner in the motor housing7. The hammer case6is also positioned in the front-rear direction with the engagement rib41. An internal gear46is held inside the inner wall44. The internal gear46forms the reduction assembly13. As shown inFIG.4, the internal gear46includes, on its outer circumferential surface, multiple protrusions47protruding frontward. The protrusions47are held between the inner wall44and the hammer case6. The hammer case6includes multiple recesses48on its inner circumferential surface. The recesses48are fitted with the respective protrusions47. As shown inFIG.5, the internal gear46is restricted from rotating by the protrusions47and the recesses48engaged with each other. An O-ring49is located inside the inner wall44. The O-ring49receives the rear end of the internal gear46. The hammer case6is cylindrical and tapered frontward. A bearing50is at the front end of the hammer case6. The bearing50supports the anvil16. The anvil16includes a pair of arms51behind the bearing50. A receiving ring52is on the inner wall of the hammer case6in front of the arms51. The receiving ring52receives the arms51. The spindle14is dividable into a shaft55at the front and a carrier56at the rear. The carrier56is hollow and disk-shaped. The carrier56includes, at its center, a cylindrical portion57that opens rearward. The cylindrical portion57is held in the bearing box40with the bearing58. The pinion43on the rotational shaft31protrudes into the cylindrical portion57. The carrier56includes three planetary gears59. The planetary gears59mesh with internal teeth on the internal gear46. 
The planetary gears59are rotatably supported by pins60. The planetary gears59mesh with the pinion43, forming the reduction assembly13. The carrier56has, at the center of its front surface, a cam projection61protruding frontward. The cam projection61protrudes into a rear portion of the shaft55. As shown inFIG.6, the cam projection61has three rear cam recesses62on its circumferential surface. The rear cam recesses62are cutouts on the front end of the cam projection61toward the rear. The rear cam recesses62each have an inner surface extending in the circumferential direction of the cam projection61and a bottom. The three rear cam recesses62are arranged at equal intervals in the circumferential direction of the cam projection61. The three rear cam recesses62receive three cam balls63. The cam balls63are restricted from moving outward in the radial direction of the cam projection61by expanded portions77of a cam75(described later), and are thus rollable circumferentially in the rear cam recesses62. The carrier56has a joint64around the cam projection61on its front surface. The joint64is annular and protrudes frontward concentrically with the cam projection61. The joint64has an outer recess65along its entire inner circumferential surface. The shaft55is a cylinder having an outer diameter smaller than the inner diameter of the joint64. The shaft55has its rear end between the cam projection61and the joint64. The shaft55has an inner recess66along its entire outer circumferential surface at the rear end. The inner recess66faces the outer recess65on the joint64. As shown inFIG.5, multiple connecting balls67are fitted in the outer recess65and in the inner recess66. The shaft55is thus prevented from slipping off the carrier56, and is also coaxially connected to the carrier56in a rotatable manner. The shaft55has a cam reception hole68that opens rearward. The cam reception hole68has a stepped-diameter including a front small diameter hole69and a rear large diameter hole70. The shaft55includes a flange71having a larger diameter than the joint64in front of the inner recess66. The cam reception hole68receives the cam75. The cam75includes a front shaft76and the expanded portions77. The front shaft76is placed into the small diameter hole69. The cam75includes three expanded portions77arranged circumferentially. The expanded portions77are placed into the large diameter hole70. As shown inFIG.7, the front shaft76has three inner grooves78on its outer circumferential surface. The inner grooves78extend in the front-rear direction. The three inner grooves78are arranged at equal intervals in the circumferential direction of the front shaft76. The small diameter hole69facing the inner grooves78has three outer grooves79on its inner circumferential surface. The outer grooves79extend frontward from the rear end of the small diameter hole69. Three coupling balls80are fitted in the inner grooves78and in the outer grooves79. The coupling balls80cause the cam75to be integrally coupled to the shaft55in the rotation direction. The cam75is movable relative to the shaft55in the front-rear direction within the range in which the coupling balls80roll back and forth in the inner grooves78and in the outer grooves79. The expanded portions77have three front cam recesses81on the rear ends. The front cam recesses81each have an arc shape recessing frontward. The front cam recesses81are fitted with the cam balls63placed in the rear cam recesses62from the front. Multiple disc springs82are externally mounted on the front shaft76. 
The disc springs82are arranged between the step at the front end of the large diameter hole70and the front surfaces of the expanded portions77, urging the cam75rearward. The front cam recesses81are engaged with the cam balls63under the urging force from the disc springs82. The rotation of the cam projection61is thus transmitted to the cam75. A hammer85is externally mounted on the shaft55. The hammer85includes a pair of tabs86on its front surface. The hammer85has a pair of outer cam grooves87on its inner circumferential surface. The outer cam grooves87extend rearward from the front end of the hammer85. The pair of outer cam grooves87are point-symmetric to each other about the axis of the hammer85. The shaft55has a pair of inner cam grooves88on its outer circumferential surface. The pair of inner cam grooves88are point-symmetric to each other about the axis of the shaft55. The pair of inner cam grooves88are each inverted V-shaped with the tip being the front. Two balls89are fitted in the outer cam grooves87and in the inner cam grooves88. With the balls89in between, the hammer85and the shaft55are coupled together in the rotation direction. The hammer85has an annular groove90on its rear surface. The groove90receives multiple spring balls91on its bottom. A washer92is behind the spring balls91. A coil spring93is externally mounted on the shaft55. The coil spring93is tapered to have a diameter gradually decreasing toward the rear. The rear end of the coil spring93is in contact with the flange71on the shaft55. The front end of the coil spring93is in contact with the washer92in the groove90. The hammer85includes a central cylindrical portion94that defines the inner circumferential surface of the groove90. Similarly to the coil spring93, the central cylindrical portion94is tapered to have a diameter gradually decreasing toward the rear. The central cylindrical portion94protrudes more rearward than the outer diameter portion of the hammer85that defines the outer circumferential surface of the groove90. The hammer85is thus urged to a forward position shown inFIGS.8A and8Bby the coil spring93. At the forward position, the balls89are at the rear ends of the outer cam grooves87and the tips of the inner cam grooves88. The shaft55has a fitting recess95in the center of its front end. The anvil16includes a fitting protrusion96at the center of its rear surface. The fitting protrusion96is fitted in the fitting recess95. The shaft55has an axial communication hole97. The communication hole97allows the fitting recess95and the cam reception hole68to communicate with each other. A receiving ball98is fitted to the front end of the communication hole97. The receiving ball98receives the rear end of the fitting protrusion96. The shaft55has a front grease supply hole99and a rear grease supply hole100. The front grease supply hole99communicates with the communication hole97between the inner cam grooves88and is open in the outer circumferential surface of the shaft55. The rear grease supply hole100communicates with the small diameter hole69in the cam reception hole68and one of the outer grooves79, and is open in the outer circumferential surface of the shaft55. The front grease supply hole99and the rear grease supply hole100are orthogonal to each other when viewed from the front. In the impact driver1according to the present embodiment, the trigger18is pressed to turn on the switch17after a bit (not shown) is attached to the anvil16. The brushless motor12is then powered to rotate the rotational shaft31. 
More specifically, the microcomputer in the control circuit board23receives, from the rotation detectors in the sensor circuit board30, rotation detection signals (rotation detection signals indicating the position of the sensor permanent magnet34in the rotor26), and determines the rotational state of the rotor26. The microcomputer then controls the on-off state of each switching element in accordance with the determined rotational state, and applies a current through the coils29in the stator25sequentially to rotate the rotor26. When the rotational shaft31rotates, the planetary gears59, which mesh with the pinion43, revolve in the internal gear46. This causes the carrier56to rotate at a lower speed. The rotation of the cam projection61integral with the carrier56is transmitted to the cam75through the cam balls63in between rolling to the circumferential ends of the rear cam recesses62, as indicated with the two-dot chain line inFIG.6. The rotation of the cam75is transmitted to the shaft55through the coupling balls80in between. The hammer85then rotates together with the shaft55with the balls89in between, thus rotating the anvil16with the arms51engaged with the tabs86. This allows tightening a screw with the bit. When the screw is tightened and increases the torque of the anvil16, the hammer85retracts against the urging force from the coil spring93while rolling the balls89along the corresponding inner cam grooves88on the shaft55. After the tabs86are disengaged from the arms51, the hammer85rotates forward along the inner cam grooves88under the urging force from the coil spring93. This then causes the tabs86to be re-engaged with the arms51, thus causing the anvil16to generate a rotational striking force (impact). This process is repeated for further tightening of the screw. When the screw is tightened in a high load state, the balls89may roll to the rear ends of the inner cam grooves88along with the retracting hammer85as shown inFIGS.9A and9B. This state is referred to as the hammer85at a rearmost position. In this state, the rear end of the central cylindrical portion94in the hammer85is not in contact with the flange71on the shaft55. When the rotational energy does not decrease with the hammer85at the rearmost position, the hammer85and the shaft55are urged to rotate further. Thus, the rotational energy of the shaft55exceeds the engagement force between the cam75and the cam projection61caused by the disc springs82. As shown inFIGS.10A and10B, the cam75integral with the shaft55in the rotation direction then rolls the cam balls63relatively to the circumferential ends of the front cam recesses81, compresses and deforms the disc springs82, and moves forward against the urging force from the disc springs82while rotating. The cam projection61and the shaft55may have a phase shift between them as the cam75moves forward and compresses and deforms the disc springs82. This can decrease the rotational energy. Thus, when the hammer85retracts to the rearmost position, a shock load is not transmitted to the carrier56. When the hammer85at the rearmost position starts moving forward under the urging force from the coil spring93, the cam75retracts under the urging force from the disc springs82to roll the cam balls63relatively to the circumferential centers of the front cam recesses81. This eliminates the phase shift between the cam projection61and the shaft55. 
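The commutation described at the start of this operation section, in which the microcomputer reads the three rotation detection signals, determines the rotational state of the rotor 26, and switches the switching elements so that current flows through the coils 29 in sequence, is commonly realized as a six-step lookup. The sketch below is a generic illustration of that scheme and not the firmware of the controller 22; the Hall-state-to-phase table and all names in it are assumptions.

# Six-step commutation: each state of the three rotation detection signals selects which two
# of the three phases conduct; +1 means the high-side switching element of that phase is on,
# -1 the low-side, 0 neither. The particular state-to-phase mapping below is a common generic
# assignment and is an assumption, not the mapping used by the controller 22.
HALL_STATE_TO_PHASES = {
    0b001: {"U": +1, "V": -1, "W": 0},
    0b011: {"U": +1, "V": 0, "W": -1},
    0b010: {"U": 0, "V": +1, "W": -1},
    0b110: {"U": -1, "V": +1, "W": 0},
    0b100: {"U": -1, "V": 0, "W": +1},
    0b101: {"U": 0, "V": -1, "W": +1},
}

def commutate(hall_state, reverse=False):
    # Return the switching pattern for one reading of the rotation detectors; inverting the
    # pattern corresponds to the forward-reverse switch lever 19 reversing the rotation.
    phases = HALL_STATE_TO_PHASES[hall_state]
    return {phase: -sign for phase, sign in phases.items()} if reverse else dict(phases)

print(commutate(0b011))                 # one forward step
print(commutate(0b011, reverse=True))   # the same step with the rotation reversed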
The impact driver1according to the present embodiment includes the brushless motor12(motor), the carrier56including the planetary gears59(reduction assembly) and rotatable by the brushless motor12, and the shaft55to receive the rotation of the carrier56and rotatable relative to the carrier56in an overloaded state. The impact driver1further includes the hammer85held by the shaft55and the anvil16to be struck by the hammer85in the rotation direction. This structure allows the carrier56and the shaft55to rotate relative to each other in an overloaded state, thus absorbing the rotational energy. This effectively reduces durability deterioration caused by a shock load. This also decreases the urging force from the coil spring93, which urges the hammer85. Thus, the first impact occurs earlier during further screwing. This reduces the likelihood of camming out (the tip of the bit separates and slips out of the screw head). The shaft55extends frontward. The hammer85is held by the shaft55with the balls89in between. The balls89roll in the inner cam grooves88(cam grooves) on the outer circumferential surface of the shaft55. This causes the hammer85to be movable back and forth between the forward position at which the hammer85is engaged with the anvil16in the rotation direction and a rearward position at which the hammer85is disengaged from the anvil16in the rotation direction. The hammer85is urged to the forward position by the coil spring93externally mounted on the shaft55. The shaft55rotates relative to the carrier56in response to an overload occurring at the rearward position for the hammer85at which the balls89reach the rearmost ends of the inner cam grooves88. The structure of the spindle14dividable into the shaft55and the carrier56allows the relative rotation in an overloaded state. A cam assembly (the cam projection61, the cam75, and the disc springs82) is located between the carrier56and the shaft55. The cam assembly transmits the rotation of the carrier56to the shaft55and rotates the carrier56and the shaft55relative to each other in the overloaded state of the shaft55. Thus, the carrier56and the shaft55are easily rotated relative to each other with the cam assembly. The cam assembly includes the cam projection61protruding frontward from the center of the carrier56, the cam75coupled to the shaft55in a manner rotatable together with the shaft55and movable back and forth relative to the shaft55, and the disc springs82(urging members) to urge the cam75to a rearward position. The cam75is engageable with the cam projection61at the rearward position to transmit the rotation of the carrier56to the shaft55, and rotates the carrier56and the shaft55relative to each other at the forward position. This structure transforms a shock load from the hammer85at the rearmost position into deformation of the disc springs82, thus effectively reducing the rotational energy. The cam projection61and the cam75are engaged with each other with the cam balls63in between. The cam projection61and the cam75transmit the rotation of the carrier56to the shaft55. Thus, the rotation of the carrier56is smoothly transmitted to the cam75. The cam projection61includes the rear cam recesses62holding the cam balls63on its outer circumferential surface. The cam75includes, on its rear end, the front cam recesses81engaged with the cam balls63. This facilitates transmission of the rotation from the cam projection61to the cam75as well as deformation of the disc springs82as the cam75moves forward. 
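The overload behavior summarized above, in which the shaft 55 slips relative to the carrier 56 once the torque overcomes the engagement set by the disc springs 82 and the cam geometry, can be bounded with a standard ball-ramp estimate. The following sketch is a rough, frictionless model under assumed numbers, not a value derived from the embodiment.

import math

def cam_release_torque_nm(spring_preload_n, ball_circle_radius_m, ramp_angle_deg):
    # Frictionless ball-ramp estimate of the torque at which the cam 75 begins to ride over the
    # cam balls 63 and compress the disc springs 82: T = F * r * tan(theta). Because the front
    # cam recesses 81 are arc-shaped, the effective angle changes with travel, so this is only
    # an order-of-magnitude model, not a value taken from the embodiment.
    return spring_preload_n * ball_circle_radius_m * math.tan(math.radians(ramp_angle_deg))

# Hypothetical numbers: 400 N of disc-spring preload, 6 mm ball-circle radius, 30 degree ramp.
print(f"{cam_release_torque_nm(400.0, 0.006, 30.0):.2f} N*m")  # about 1.39 N*m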
The structure includes the three cam balls63, the three rear cam recesses62, and the three front cam recesses81. This allows transmission of the rotation from the cam projection61to the cam75as well as deformation of the disc springs82in a well-balanced manner as the cam75moves forward. The cam75is coupled to the shaft55with the coupling balls80in a manner rotatable together with the shaft55and movable back and forth relative to the shaft55. This reliably allows switching between transmission of the rotation from the cam75to the shaft55and relative rotation. The shaft55is cylindrical and has the rear end with an opening. The cam75and the disc springs82are accommodated in the shaft55. Thus, the cam assembly can be located in a small space using the shaft55. The shaft55internally has the cam reception hole68including a rear portion with a larger diameter than a front portion. The cam75is a shaft having a stepped-diameter including the front shaft76(smaller-diameter portion) placed in the front portion of the cam reception hole68and the expanded portions77(larger-diameter portions) placed in the rear portion of the cam reception hole68. The urging members include the multiple disc springs82externally mounted on the front shaft76. Thus, the urging members can be included in a small space in the shaft55. The shaft55receives the cam projection61in its rear end and is coupled to the carrier56at its rear end in a rotatable manner. Thus, the shaft55and the carrier56can be integrated into the dividable spindle14in a space-saving manner. The carrier56includes, on its front surface, the joint64that is annular and concentric with the cam projection61. The shaft55is connected to the inner surface of the joint64at its rear end in a rotatable manner. Thus, the shaft55can be easily connected using the joint64. The joint64and the rear end of the shaft55are connected to each other with the multiple connecting balls67arranged in the circumferential direction of the joint64and the shaft55. Thus, the shaft55and the carrier56, which are rotatable relative to each other, can be reliably connected. The shaft55includes the flange71receiving the rear end of the coil spring93. This allows the coil spring93and the shaft55to rotate together. Modifications will now be described. In the embodiment, the carrier includes the cam projection and the cam includes the expanded portion covering the cam projection. In some embodiments, the cam may include the cam projection in its rear portion and the carrier may include the expanded portion covering the cam projection on its front surface. The structure may include more or fewer front cam recesses, rear cam recesses, and balls than in the illustrated example. The number of disc springs to urge the cam may be changed as appropriate. The urging members may be, for example, coil springs other than disc springs. The structure may include more or fewer inner grooves, outer grooves, and balls to couple the shaft and the cam than in the illustrated example. The shaft and the cam may be key-coupled or splined, without using the balls. The reduction assembly may include more or fewer planetary gears than in the illustrated example. The motor is not limited to a brushless motor. The power source is not limited to a battery pack but may be utility power. The present disclosure is also applicable to impact tools other than an impact drive, such as an angle impact driver. 
REFERENCE SIGNS LIST: 1 impact driver, 2 body, 3 grip, 4 body housing, 6 hammer case, 12 brushless motor, 13 reduction assembly, 14 spindle, 15 striking assembly, 16 anvil, 22 controller, 31 rotational shaft, 43 pinion, 55 shaft, 56 carrier, 59 planetary gear, 61 cam projection, 62 rear cam recess, 63 cam ball, 64 joint, 67 connecting ball, 68 cam reception hole, 71 flange, 75 cam, 76 front shaft, 77 expanded portion, 81 front cam recess, 82 disc spring, 85 hammer, 89 ball, 93 coil spring
11858095 | DETAILED DESCRIPTION The following detailed description is of the best currently contemplated modes of carrying out the invention. The description is not to be taken in a limiting sense but is made merely for the purpose of illustrating the general principles of the invention. Overview Embodiments of the invention are directed to a universal multifunctional skateboard and surfboard tool. The universal multifunctional skateboard and surfboard tool as herein described may be primarily structured and sized for use on skateboards and surfboards, such as for adjusting the wheel mount nuts and wheel-trucks and associated fasteners of skateboards and adjusting the fins, decks, and leash of a surfboard. Although the present invention is described for use with skateboards and surfboards, this is by way of example only and the present tool can also be used on other equipment including, but not limited to, in-line skates, bicycles, and snowboards. The universal multifunctional tool is small, lightweight, and compact enough for an individual to carry the tool while skateboarding or surfing so the tool is readily available to make repairs and adjustments to the equipment. That is, the portability of the tool allows for on-the-fly repair of equipment without having to return to a different location for retrieving the tool. Furthermore, the structure or configuration of the tool provides portability allowing the tool to be carried in a pocket so that the user is unlikely to become injured by the tool should he or she fall. For example, the tool may be carried in the watch pocket of a pair of jeans or other pocket of a piece of clothing such as the pocket of a shirt, coat, or jacket. The present universal multifunctional tool, when partly disassembled or reconfigured from its stored state, allows for convenient pipe smoking whenever one wishes. Skateboard and Surfboard Tool FIG.1is a perspective view of a skateboard and surfboard tool of the present invention.FIG.2is a right-side elevation view of the skateboard and surfboard tool ofFIG.1.FIG.3is a left-side elevation view of the skateboard and surfboard tool ofFIG.1.FIG.4is a front-end view of the skateboard and surfboard tool ofFIG.1.FIG.5is a rear end view of the skateboard and surfboard tool ofFIG.1.FIG.6is a top plan view of the skateboard and surfboard tool ofFIG.1.FIG.7is a bottom plan view of the skateboard and surfboard tool ofFIG.1.FIG.8is a perspective view of the skateboard and surfboard tool ofFIG.1showing an adjustment member separated from a housing.FIG.9is a cross-sectional view taken along line9-9inFIG.1.FIG.10is a cross-sectional view of the skateboard and surfboard tool showing the flow of air when in use as a pipe for smoking tobacco products.FIG.11is a cross-sectional view of the skateboard and surfboard tool showing the adjustment member extending outward.FIG.12is a top plan view of skateboard and surfboard tool according to another embodiment of the present invention.FIG.13is a cross-sectional view taken along line13-13inFIG.12. The following discussion refers interchangeably toFIGS.1-13. As shown, the universal multifunctional skateboard and surfboard tool100includes a housing101adapted to receive an adjustment member122. The housing101comprises an elongated cylindrical member102integrally connected between a first socket head108, at a distal end of the tool100, and a second socket head112at a proximal end of the tool100. 
The elongated cylindrical member 102 is defined by an annular sidewall having a first elongated cylindrical member end 104 integrally connected to the first socket head 108 and an opposing second elongated cylindrical member end 106 integrally connected to the second socket head 112. The first socket head 108 includes an annular sidewall defining a first hexagonal socket 110. The second socket head 112 includes an annular sidewall defining a second hexagonal socket 116 at a first vertical end and a third hexagonal socket 118 at a second vertical end, where the second and third hexagonal sockets 116, 118 are located within the same vertical plane, with the second hexagonal socket 116 at a first vertical end of that plane and the third hexagonal socket 118 at an opposing second vertical end of that plane. According to one aspect, the diameter of the first hexagonal socket 110 is smaller than the diameter of the second hexagonal socket 116, while the diameter of the second hexagonal socket 116 is smaller than the diameter of the third hexagonal socket 118. Each of the annular sidewalls of the first, second, and third hexagonal sockets 110, 116, 118 extends at a right angle to the lengthwise axis of the elongated cylindrical member 102 and is for use in manipulating nuts and bolt heads. The hexagonal sockets may be any size, including, but not limited to, ⅜, ½, 9/16, and 3/32. While the socket heads illustrated in the figures have generally cylindrical outer contours and smooth outer surfaces, the socket heads may have any suitable outer shape, including, but not limited to, square, hexagonal, octagonal, and rectangular. In one embodiment, the housing 101 of the multifunctional skateboard and surfboard tool 100 may include a cavity 120 extending horizontally through the first socket head 108, the elongated cylindrical member 102, and the second socket head 112. The cavity 120 extends from a first hole 121 located in an outer surface of the annular sidewall of the first socket head 108 and terminates at a second hole 123 located in an outer surface of the annular sidewall of the third hexagonal socket 118 of the second socket head 112. According to one aspect, the first and second holes 121, 123 may be located in the same horizontal plane, allowing the adjustment member 122 to be received within the cavity 120. When the adjustment member 122 is fully received within the cavity 120 of the housing 101, the universal multifunctional skateboard and surfboard tool 100 is in an assembled configuration. When in the assembled configuration, the universal multifunctional skateboard and surfboard tool 100 is lightweight, compact, and portable, allowing the tool 100 to be safely stored in a pocket of clothing, for example, so that the person is unlikely to become injured by the tool 100 should he or she fall during an event. As described in more detail below, when the universal multifunctional skateboard and surfboard tool 100 is in a disassembled configuration from its stored state, the housing 101 allows for convenient pipe smoking whenever one wishes. In one embodiment, the adjustment member 122 may be a combination of an Allen® key on one end and a Phillips® head on the other end.
As shown, the adjustment member 122 may comprise a first arm 124, terminating in a Phillips® head 126, integrally connected to a second arm 128, terminating in an Allen® key 130, where the first arm 124 is perpendicular to the second arm 128 (see FIGS. 5-7). Alternatively, the first arm 124 may terminate in an Allen® key and the second arm 128 may terminate in a Phillips® head. In another embodiment, both the first arm 124 and the second arm 128 may terminate in Phillips® heads, or both may terminate in Allen® keys (see FIG. 8). The first arm 124 may be longer than the second arm 128. According to one aspect, the adjustment member 122 may utilize standard Phillips® and/or Allen® key heads, allowing the user to easily replace the adjustment member 122, if lost or damaged, with an off-the-shelf adjustment member 122 that may be found at any local hardware store. The Allen® key heads may be any size, including, but not limited to, ⅛ and 3/32. One or more magnets may be utilized to detachably secure the adjustment member 122 within the housing of the universal multifunctional skateboard and surfboard tool 100. As shown in FIGS. 1, 6, and 8, a magnet 132 may be located on the outer surface of the annular sidewall of the second socket head 112 above the second hole 123, where the second hole 123 is adapted to receive the adjustment member 122. When the universal multifunctional skateboard and surfboard tool 100 is in the fully assembled configuration, the adjustment member 122 is fully received within the cavity 120 of the housing 101 such that the second arm 128 of the adjustment member 122 is detachably connected to the magnet 132, and the first arm 124 extends through the cavity 120 and protrudes through the first hole 121 in the first socket head 108 beyond the distal end of the housing 101. To release the adjustment member 122 from the housing 101, the user presses or pushes the protruding portion of the adjustment member 122 inward along the axial direction of the cavity 120, causing the adjustment member 122 to be released from the magnet 132 and allowing the adjustment member 122 to be removed from the cavity 120 of the housing 101. As mentioned previously, when in a disassembled configuration and the adjustment member is removed from the housing 101, the housing 101 may be used as a pipe to smoke tobacco. FIG. 10 is a cross-sectional view of the skateboard and surfboard tool showing the flow of air when in use as a pipe for smoking tobacco products. As shown in FIG. 10, the first socket head 108 serves as a mouthpiece end or mouth end to be engaged by the smoker's lips when the housing 101 is used to smoke tobacco. Tobacco may be placed in the bowl of the second hexagonal socket 116 and, when lit, the smoke and air from the burning tobacco may be drawn into the cavity 120 by the smoker when the housing 101 is being used as a pipe. Air may be drawn into the cavity from a socket hole 140 in the base of the second hexagonal socket 116 and from the second hole 123. When in the disassembled configuration, a user may insert the end of the first arm of the adjustment member into the first hole 121 in the first socket head 108. FIG. 11 is a cross-sectional view of the skateboard and surfboard tool showing the adjustment member extending outward from the distal end of the housing 101. As shown, the end of the first arm of the adjustment member 122 is inserted into the cavity 120 via the first hole 121 until the end of the adjustment member extends through the first hexagonal socket 110 and is engaged on the opposite side.
By configuring the tool as shown in FIG. 11, the housing 101 may be used as an extension to the adjustment member 122, providing the user with additional leverage when making adjustments. Furthermore, the extension allows the tool to rotate on the end of the axis without touching or interfering with the deck of the skateboard. FIG. 12 is a top plan view of the universal multifunctional skateboard and surfboard tool according to another embodiment of the present invention. FIG. 13 is a cross-sectional view taken along line 13-13 in FIG. 12. As described above with reference to FIG. 10, the first socket head 108 serves as a mouthpiece end or mouth end to be engaged by the smoker's lips when the housing 101 is used to smoke tobacco. Tobacco may be placed in the bowl of the second hexagonal socket 116 and, when lit, the smoke and air from the burning tobacco may be drawn into the cavity by the smoker when the housing 101 is being used as a pipe to smoke tobacco. The base 144 of the second hexagonal socket 116 may be sloped or tapered to one side, creating a funnel inside the second hexagonal socket 116 so that air may be drawn into the cavity from an offset socket hole 142 in the base 144 of the second hexagonal socket 116 and from the second hole 123. That is, the offset socket hole 142 is offset from the center of the base of the second hexagonal socket 116. By offsetting the offset socket hole 142 in the base 144 of the second hexagonal socket 116, a smoker may place a finger over the first hole 121, which acts as a carburetor regulating the flow of air through the cavity 120 and directing the heat away from the fingers of the smoker. CONCLUSION Within the present disclosure, the word "exemplary" is used to mean "serving as an example, instance, or illustration." Any implementation or aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects of the disclosure. Likewise, the term "aspects" does not require that all aspects of the disclosure include the discussed feature, advantage, or mode of operation. The term "coupled" is used herein to refer to the direct or indirect coupling between two objects. For example, if object A physically touches object B, and object B touches object C, then objects A and C may still be considered coupled to one another, even if they do not directly physically touch each other. For instance, a first object may be coupled to a second object even though the first object is never directly physically in contact with the second object. The terms "at least one" and "one or more" may be used interchangeably herein. Within the present disclosure, use of the construct "A and/or B" may mean "A or B or A and B" and may alternatively be expressed as "A, B, or a combination thereof" or "A, B, or both". Within the present disclosure, use of the construct "A, B, and/or C" may mean "A or B or C, or any combination thereof" and may alternatively be expressed as "A, B, C, or any combination thereof". One or more of the components, steps, features and/or functions illustrated herein may be rearranged and/or combined into a single component, step, feature, or function or embodied in several components, steps, or functions. Additional elements, components, steps, and/or functions may also be added without departing from novel features disclosed herein. The apparatus, devices, and/or components illustrated herein may be configured to perform one or more of the methods, features, or steps described herein.
The novel algorithms described herein may also be efficiently implemented in software and/or embedded in hardware. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more. A phrase referring to “at least one of:” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover: a; b; c; a and b; a and c; b and c; and a, b and c. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112(f) unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for.” While the foregoing disclosure shows illustrative aspects, it should be noted that various changes and modifications could be made herein without departing from the scope of the appended claims. The functions, steps or actions of the method claims in accordance with aspects described herein need not be performed in any particular order unless expressly stated otherwise. Furthermore, although elements may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated. | 16,034 |
11858096
DETAILED DESCRIPTION In the following disclosure, various specific embodiments are described. It is to be understood that those embodiments are example implementations of the disclosed inventions and that alternative embodiments are possible. Such alternative embodiments include hybrid embodiments that include features from different disclosed embodiments. All such embodiments are intended to fall within the scope of this disclosure. Definitions The following terms, as used in this disclosure, have the following definitions: "Preload": The tensile force in a threaded fastener and the equivalent compressive force in the joint members when there is no applied load. "Tightening torque": The torque applied with a torque tool to a threaded fastener to achieve preload in a bolted joint. The tightening torque is the amount of torque required to overcome thread and nut friction plus the torque required to stretch the bolt to achieve preload. "Removal torque": The torque applied with a torque tool to initiate removal or disassembly of a threaded fastener in a bolted joint. The removal torque is the amount of torque required to overcome thread and nut friction minus the torque from bolt stretch. "Torque difference": The difference between an applied tightening torque and an applied removal torque. "Retightening torque": The torque applied with a torque tool to a threaded fastener to achieve preload in a bolted joint after the tightening torque and the removal torque have been applied to the fastener. Retightening torque is the amount of torque required to overcome thread and nut friction plus the torque required to stretch the bolt to achieve preload after the tightening torque and the removal torque have been applied to the fastener. "Torque tool": A device, including torque wrenches, that is configured to apply and measure torque to a threaded fastener. INTRODUCTION Nominal preload, Fp, is defined in terms of the installation tightening torque, Tt, as

Fp = Tt / (k D)    (1)

In this equation, k is the nut factor determined from torque-tension data and D is the nominal bolt diameter. The minimum and maximum preload are determined from the nominal preload, Fp, and the uncertainty, Γ,

Fp min = (1 − Γ) Fp,  Fp max = (1 + Γ) Fp    (2)

The uncertainty as to the preload is typically specified as +/−35% for unlubricated fasteners and +/−25% for lubricated fasteners, or is determined from torque-tension data. Equations for minimum and maximum preload often include additional parameters to account for preload variation due to relaxation, creep, and temperature change. As an example, assume a designer specifies a tightening torque of 60 in-lb for fasteners without lubricant to achieve a nominal preload of 900 lb with an uncertainty of +/−35%. This means that a tightening torque of 60 in-lb will provide a preload between the minimum preload of 585 lb (i.e., 900 lb minus 35%) and the maximum preload of 1,215 lb (900 lb plus 35%). As part of the design process, one must check that the minimum preload is sufficient to hold the joint together and that the maximum preload is not so large as to damage the fasteners or joint. This example illustrates why some designers use a nominal preload of 65% of the yield strength of the bolt. Specifically, the maximum preload from nominal preload plus 35% uncertainty is at 100% yield strength of the bolt. Less common methods to achieve preload are based on control of turn-angle or bolt stretch.
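For comparison with the approach disclosed later, the conventional torque-based estimate of Equations 1 and 2 can be written as a short numerical sketch. The Python below is illustrative only; the nut factor of 0.2667 is an assumed value chosen so that the 60 in-lb, 0.25 in example above reproduces the 900 lb nominal preload, and would in practice come from torque-tension data.

```python
def preload_from_tightening_torque(tightening_torque, nut_factor, bolt_diameter, uncertainty):
    """Equations 1 and 2: nominal preload from tightening torque, with min/max bounds."""
    nominal = tightening_torque / (nut_factor * bolt_diameter)   # Fp = Tt / (k D)
    return nominal, (1 - uncertainty) * nominal, (1 + uncertainty) * nominal

# Example from the text: 60 in-lb on an unlubricated 0.25 in fastener, +/-35% uncertainty.
nominal, minimum, maximum = preload_from_tightening_torque(60.0, 0.2667, 0.25, 0.35)
print(round(nominal), round(minimum), round(maximum))  # ~900 lb nominal, ~585 lb min, ~1215 lb max
```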
These turn-angle and bolt-stretch methods provide reduced uncertainty, but require the use of less common and more expensive equipment than a typical torque wrench, as well as additional training, technique, and preparation. The uncertainty in preload when using turn-angle measurement for preload is specified as +/−25% or as determined from turn-angle testing of sample hardware and statistical analysis. Bolt-stretch measurement is performed using special calipers or micrometers, ultrasound equipment, or strain gages. Each of these devices requires additional preparation of the fasteners, such as machining bolt ends or strain gauge attachment and instrumentation. The uncertainty in preload when using the bolt-stretch measurement for preload is specified as +/−10% or as determined from testing of sample hardware and statistical analysis. Of the three methods described above, tightening torque has the advantage of only requiring common tools, techniques, and training to implement, but has the highest uncertainty. While the turn-angle and bolt-stretch methods provide reduced uncertainty, they are generally more expensive and require additional tools, preparation, techniques, and training. Disclosed in this application is a novel approach to achieving a desired preload. Instead of using tightening torque alone with existing torque-tension data to achieve preload, the disclosed approach uses thread pitch along with tightening torque and removal torque measurements at the time of installation, as preload is proportional to the tightening torque minus the removal torque. Accordingly, no existing torque-tension data is needed in the disclosed approach. Instead, the approach is based on simple torque equations. These equations and test data are provided below. The test data establishes that the disclosed approach produces results with less uncertainty as compared to using tightening torque alone. Determining Preload from Thread Pitch and Torque Difference The tightening torque for a threaded fastener in a bolted joint can be mathematically defined as

Tt = Fp (p/(2π) + μt rt/cos β + μn rn)    (3)

In this equation, Tt is the tightening torque, Fp is the preload, p is the thread pitch, μt is the thread interface friction coefficient, rt is the nominal thread interface radius, β is the thread half angle, μn is the nut face friction coefficient, and rn is the nominal nut face radius. The first term in the parentheses of the equation is the torque required to stretch the bolt, and the remaining two terms are the torque required to overcome thread and nut friction, respectively. Using torque-tension data, tightening torque is typically specified and applied with a torque wrench to achieve desired preload. Equation 3 provides a relation between preload and tightening torque. The two coefficients of friction are the most difficult to estimate. Friction introduces the most uncertainty in this relation and in torque-tension data. In practice, this translates to uncertainty in preload for a specified tightening torque. Similarly, the removal torque for a threaded fastener in a bolted joint can be mathematically defined as

Tr = Fp (−p/(2π) + μt rt/cos β + μn rn)    (4)

This removal torque is the torque required to overcome thread and nut friction minus the torque from bolt stretch and, therefore, is the torque applied with a torque wrench or other torque tool to initiate removal or disassembly of a threaded fastener in a bolted joint.
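Equations 3 and 4 can be exercised numerically to show why the torque difference isolates preload: the friction terms are identical in both expressions and cancel when Equation 4 is subtracted from Equation 3. The following Python sketch is illustrative only; the friction coefficients and interface radii are assumed values for a 0.25-28 fastener, not data from this disclosure.

```python
import math

def fastener_torques(preload, pitch, mu_thread, r_thread, half_angle_deg, mu_nut, r_nut):
    """Equations 3 and 4: tightening and removal torque for a given preload."""
    friction = mu_thread * r_thread / math.cos(math.radians(half_angle_deg)) + mu_nut * r_nut
    stretch = pitch / (2.0 * math.pi)
    return preload * (stretch + friction), preload * (-stretch + friction)

# Assumed values (illustrative only): 60-deg thread form (30-deg half angle),
# friction coefficients of 0.15, thread radius 0.112 in, nut face radius 0.156 in.
Tt, Tr = fastener_torques(900.0, 1.0 / 28.0, 0.15, 0.112, 30.0, 0.15, 0.156)
print(round(Tt, 1), round(Tr, 1))                   # tightening and removal torque, in-lb
print(round((math.pi / (1.0 / 28.0)) * (Tt - Tr)))  # recovers 900 lb regardless of the friction values
```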
A relation between preload and tightening torque minus removal torque is defined by subtracting Equation 4 from Equation 3 and solving for preload

Fp = (π/p)(Tt − Tr)    (5)

This result reveals that preload can be determined from thread pitch, tightening torque, and removal torque alone. In other words, preload can be determined given nothing more than the known thread pitch and a torque tool. For threaded fasteners with a prevailing torque locking feature (e.g., lock nuts and locking inserts), the tightening torque and removal torque equations become

Tt = Fp (p/(2π) + μt rt/cos β + μn rn) + Tpv    (6)

and

Tr = Fp (−p/(2π) + μt rt/cos β + μn rn) + Tpv    (7)

In these equations, Tpv is the prevailing torque, which is independent of preload Fp and adds to both the tightening torque and removal torque. As before, the equation for preload is determined by subtracting removal torque (Eq. 7) from tightening torque (Eq. 6) and solving for preload to obtain Equation 5. Preload, in terms of tightening torque minus removal torque, is independent of the prevailing torque, provided that the prevailing torque is the same in both directions. Determining Self-Loosening and Primary Locking The negative term in the removal torque equation defines the inherent self-loosening of a fastener. Such self-loosening results from the bolt-stretch torque and the associated potential energy within the bolt. The bolt-stretch torque is inherent to the threaded fastener and is proportional to preload and thread pitch. Self-loosening can be defined in terms of tightening torque and removal torque by subtracting Equation 4 from Equation 3 and dividing by 2

Fp p/(2π) = ½ (Tt − Tr)    (8)

The significance of this equation is that self-loosening can be quantified using measurements of tightening torque and removal torque, instead of requiring a measurement of preload. This equation for self-loosening also applies to fasteners with prevailing torque locking, as shown by subtracting Equation 7 from Equation 6 and dividing by 2. The friction terms in the tightening torque and removal torque equations define the primary locking in a bolted joint. This primary locking is dependent on preload and friction. The primary locking is defined in terms of tightening torque and removal torque by adding Equations 3 and 4 and dividing by 2

Fp (μt rt/cos β + μn rn) = ½ (Tt + Tr)    (9)

The significance of the above equation is that primary locking can be quantified using measurements of tightening torque and removal torque, instead of requiring estimates of thread interface and nut face coefficients of friction. For threaded fasteners with a prevailing torque locking feature, such as a locking nut, the primary locking is determined by adding Equations 6 and 7 and solving for primary locking

Fp (μt rt/cos β + μn rn) = ½ (Tt + Tr) − Tpv    (10)

This equation reveals that, even with a prevailing torque locking feature, the primary locking can be quantified with measurements of tightening, removal, and prevailing torque. Uncertainty Existing methods of using tightening torque control to achieve preload are frequently used but result in significant uncertainty in preload. A primary source for this uncertainty is from thread-interface friction and nut-face friction (or bolt-face friction when using inserts). Uncertainty exists within test samples in torque-tension tests due to variations in test fasteners and interface conditions.
Additional variation, not only in fastener and interface conditions but also in tools, technique, and technician, is introduced for actual fastener hardware used at installation compared to samples used in torque-tension testing. Existing methods also require previously obtained torque-tension data to specify tightening torque at installation and a specification or previous data for uncertainty. The disclosed approach, on the other hand, uses both tightening torque and removal torque to determine preload. Preload is determined from the thread pitch, the tightening torque, and the removal torque measured at installation. No previous torque-tension data is needed for a nut factor to relate tightening torque to preload. Preload can be determined given nothing more than thread pitch and a torque tool to measure tightening torque and removal torque at installation. Since preload is determined by subtracting removal torque from tightening torque, the thread interface friction and nut face friction are canceled out, as is apparent from Equations 3 to 5. However, subtracting quantities with uncertainty does not usually subtract out the uncertainty. Regardless, uncertainty in preload from torque difference (i.e., tightening minus removal torque) is expected to be less than the uncertainty in preload from tightening torque for two reasons, which are described below. First, preload from tightening torque alone is based on torque-tension data previously obtained with representative sample fasteners and interface conditions using a different technician, tools, and possibly technique than used at installation. Uncertainty results from reproducibility within these previously tested sample fasteners as well as the actual fastener at installation. Preload from torque difference as described herein requires no previous data. Instead, preload is determined based on torque measurements at installation for the actual fastener, interface conditions (including the state of coatings, cleanliness, and lubricant), technician, technique, and tools. Uncertainty results from repeatability (i.e., more than one torque measurement on the actual fastener) rather than reproducibility. Second, torque wrench accuracy and repeatability are relevant for preload from torque, whereas only torque wrench repeatability is relevant for preload from torque difference. Calibration certificate data generally show accuracy of 2 to 4% compared to repeatability of less than 1%. Testing Testing was performed to confirm the above-described relationship between preload and thread pitch, tightening torque, and removal torque. The fasteners used in this testing included cadmium plated AN4-14A bolts (0.25-28, 1.53″ long), AN315-4 full hex nuts, AN365-428A elastic stop lock nuts, AN363-428 all-metal lock nuts, and AN960-416 washers, as well as stainless steel AN4C14A bolts, AN315C4R full hex nuts, AN363C428 all-metal lock nuts, and AN960C416 washers. Test equipment and a fixture were assembled for tests to be performed with a bolt and a nut. A washer was used under both the bolt head and the nut. The fixture integrated a strain gage through-hole load cell for preload measurement. The load cell was rated for 2,000 lb and had a height of 0.63″, an outer diameter of 2.0″, and a loading diameter of 0.88″. The accuracy was within 1% and repeatability is within +/−0.1% full scale. 
The fixture included a cone-shaped component made of A286 steel that was 0.50″ tall with a 1.1″ top outer diameter, a 2.1″ bottom outer diameter, and a clearance hole for a 0.25-28 bolt. Tests with stainless steel Helicoil 1191-4CN375 nonlocking inserts and 3591-4CN375 locking inserts were performed with both AN4-14A and AN4C14A bolts. The fixture also included an additional cylindrical bottom component made of A286 steel that was 1.1″ tall with a 1.3″ outer diameter. The bottom component was drilled and tapped for 0.25-28 Helicoil insert installation. A washer was used under the bolt head. Tightening torque and removal torque were measured with a dial-type torque wrench. The torque wrench range was 15-75 in-lb (20-100% full scale) in increments of 1 in-lb. The accuracy was within +/−4% of the torque reading in both directions; however, the actual repeatability from the certificate of calibration was within +/−1% of the torque reading in both directions. For example, the repeatability at 60 in-lb was within +/−0.6 in-lb. For purposes of these tests, repeatability is more relevant than accuracy because torque difference instead of absolute torque is used to determine preload. A dial-type torque wrench with a range of 6-30 in-lb (20-100% full scale) in increments of 0.5 in-lb was used for measuring prevailing torque in tests with lock nuts and locking inserts. The accuracy was +/−4% and repeatability was within +/−1% of the torque reading in both directions. All tests were performed with fasteners in their as-received condition. In other words, the fasteners were removed from manufacturer packaging and used without solvent cleaning or the addition of lubricant. Since the recommended range for tightening torque of AN-4 bolts is 50-70 in-lb, each of the test fasteners was assembled using a tightening torque of 60 in-lb plus any measured prevailing torque for lock nuts or locking inserts. Bolt with Nut Test Data Table 1 presents sample data from tests with an AN4C14A bolt and an AN315C4R full hex nut having known thread pitches. Each row in the table shows measured data and calculations for a test. For each test, a tightening torque of 60 in-lb was first applied to the nut and the resulting preload was measured from the load cell. Then, a removal torque was applied to the nut in the opposite direction and measured. The preload was calculated based upon the torque difference (i.e., tightening torque minus removal torque) according to Equation 5, and the percent difference between the measured and computed preload was calculated. Self-loosening was calculated using Equation 8, first from measured preload and then from measured torque difference. Finally, the primary locking was calculated from the torque sum using Equation 9. This was performed for a total of 20 tests (1 initial use and 19 reuses) for the same bolt and nut. While the nut was not removed between tests, the measured preload was reduced to zero before each reuse.
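As an illustration of how the calculated columns in Table 1 follow from each row's measurements, the first test row (Tt = 60 in-lb, measured Fp = 1,030 lb, Tr = 49 in-lb) can be reproduced with a few lines of Python. This sketch is illustrative only and is not part of the patent text.

```python
import math

pitch = 1.0 / 28.0                      # 0.25-28 thread, pitch in inches
Tt, Fp_measured, Tr = 60.0, 1030.0, 49.0

preload_computed = (math.pi / pitch) * (Tt - Tr)               # Equation 5: ~968 lb
loosening_from_preload = Fp_measured * pitch / (2 * math.pi)   # Equation 8, from measured preload: ~5.9 in-lb
loosening_from_torques = 0.5 * (Tt - Tr)                       # Equation 8, from torque difference: 5.5 in-lb
primary_locking = 0.5 * (Tt + Tr)                              # Equation 9: 54.5 in-lb
print(round(preload_computed), round(loosening_from_preload, 1),
      loosening_from_torques, primary_locking)
```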
TABLE 1. Measurements and calculations for AN4C14A bolt with AN315C4R full hex nut.
Measurements: Tt (in-lb) | Fp (lb) | Tr (in-lb); Calculations: π(Tt − Tr)/p (lb) | % diff | Fp p/2π (in-lb) | (Tt − Tr)/2 (in-lb) | (Tt + Tr)/2 (in-lb)
60 | 1030 | 49 | 968 | 6.1 | 5.9 | 5.5 | 54.5
60 | 900 | 51 | 792 | 12.0 | 5.1 | 4.5 | 55.5
60 | 860 | 51 | 792 | 7.9 | 4.9 | 4.5 | 55.5
60 | 800 | 50 | 880 | −10.0 | 4.5 | 5.0 | 55.0
60 | 795 | 51 | 792 | 0.4 | 4.5 | 4.5 | 55.5
60 | 750 | 52 | 704 | 6.2 | 4.3 | 4.0 | 56.0
60 | 740 | 51 | 792 | −7.0 | 4.2 | 4.5 | 55.5
60 | 770 | 52 | 704 | 8.6 | 4.4 | 4.0 | 56.0
60 | 750 | 52 | 704 | 6.2 | 4.3 | 4.0 | 56.0
60 | 740 | 52 | 704 | 4.9 | 4.2 | 4.0 | 56.0
60 | 750 | 52 | 704 | 6.2 | 4.3 | 4.0 | 56.0
60 | 740 | 51 | 792 | −7.0 | 4.2 | 4.5 | 55.5
60 | 730 | 51 | 792 | −8.4 | 4.1 | 4.5 | 55.5
60 | 720 | 51 | 792 | −10.0 | 4.1 | 4.5 | 55.5
60 | 740 | 51 | 792 | −7.0 | 4.2 | 4.5 | 55.5
60 | 725 | 52 | 704 | 2.9 | 4.1 | 4.0 | 56.0
60 | 720 | 51 | 792 | −10.0 | 4.1 | 4.5 | 55.5
60 | 730 | 52 | 704 | 3.6 | 4.1 | 4.0 | 56.0
60 | 715 | 52 | 704 | 1.6 | 4.1 | 4.0 | 56.0
60 | 725 | 52 | 704 | 2.9 | 4.1 | 4.0 | 56.0
The data of Table 1 reveals that the preload drops with reuse even with the same tightening torque. This is due to an increase in friction at the thread interface and nut face with reuse. This is supported by the computed increase in primary locking with reuse and was not unexpected since the fasteners used were as-received with no added lubricant. The self-loosening computed using the measured preload compares well with the self-loosening computed using the torque difference, even with significant reuse. This is evidence of the validity of the disclosed concept of using one-half the tightening minus removal torque for determining self-loosening. The preload determined using the thread pitch and torque difference compares well with the measured preload with a maximum percent difference of 12%. It is noted that, even though the applied tightening torque was constant at 60 in-lb, the measured removal torque increased with reuse and the corresponding computed preload decreased with reuse. FIG. 1 plots the measured preload against tightening torque for all 20 tests. As shown in this figure, the spread in measured preload with tightening torque was 315 lb. FIG. 2 plots the measured and computed preload against the applied tightening torque minus removal torque for all 20 tests. The spread in the measured preload for a given torque difference was less than the spread in measured preload with tightening torque alone. Furthermore, the difference between measured preload and computed preload in FIG. 2 was much less than the spread in measured preload with tightening torque in FIG. 1. This is also evidence of the validity of the concept of using thread pitch and torque difference for calculating preload as defined in Equation 5. One issue with the disclosed approach is that the process of measuring removal torque removes preload. The solution to this issue is to retighten the nut to the tightening torque. Therefore, the process required to use tightening torque and removal torque to determine preload can include retightening of the nut to the tightening torque. Table 2 presents sample data that includes retightening to the tightening torque and measuring preload after this retightening. The process of tightening, removing the applied torque, and retightening was performed 19 times with the same bolt and nut. Each row in Table 2 presents these torque measurements, the preload measurement after retightening, and calculations of preload, percent difference, self-loosening, and primary locking for the process. The percent difference shown in the table is the difference between the measured preload after retightening and the computed preload from the torque difference.
TABLE 2. Measurements and calculations for AN4C14A bolt with AN315C4R full hex nut with retightening.
Measurements: Tt (in-lb) | Tr (in-lb) | Tt (in-lb) | Fp (lb); Calculations: π(Tt − Tr)/p (lb) | % diff | Fp p/2π (in-lb) | (Tt − Tr)/2 (in-lb) | (Tt + Tr)/2 (in-lb)
60 | 49 | 60 | 900 | 968 | −7.5 | 5.1 | 5.5 | 54.5
60 | 51 | 60 | 860 | 792 | 7.9 | 4.9 | 4.5 | 55.5
60 | 51 | 60 | 800 | 792 | 1.0 | 4.5 | 4.5 | 55.5
60 | 50 | 60 | 795 | 880 | −10.6 | 4.5 | 5.0 | 55.0
60 | 51 | 60 | 750 | 792 | −5.6 | 4.3 | 4.5 | 55.5
60 | 52 | 60 | 740 | 704 | 4.9 | 4.2 | 4.0 | 56.0
60 | 51 | 60 | 770 | 792 | −2.8 | 4.4 | 4.5 | 55.5
60 | 52 | 60 | 750 | 704 | 6.2 | 4.3 | 4.0 | 56.0
60 | 52 | 60 | 740 | 704 | 4.9 | 4.2 | 4.0 | 56.0
60 | 52 | 60 | 750 | 704 | 6.2 | 4.3 | 4.0 | 56.0
60 | 52 | 60 | 740 | 704 | 4.9 | 4.2 | 4.0 | 56.0
60 | 51 | 60 | 730 | 792 | −8.4 | 4.1 | 4.5 | 55.5
60 | 51 | 60 | 720 | 792 | −10.0 | 4.1 | 4.5 | 55.5
60 | 51 | 60 | 740 | 792 | −7.0 | 4.2 | 4.5 | 55.5
60 | 51 | 60 | 725 | 792 | −9.2 | 4.1 | 4.5 | 55.5
60 | 52 | 60 | 720 | 704 | 2.3 | 4.1 | 4.0 | 56.0
60 | 51 | 60 | 730 | 792 | −8.4 | 4.1 | 4.5 | 55.5
60 | 52 | 60 | 715 | 704 | 1.6 | 4.1 | 4.0 | 56.0
60 | 52 | 60 | 725 | 704 | 2.9 | 4.1 | 4.0 | 56.0
The preload determined using the thread pitch and the torque difference compares well with the measured preload after retightening with a maximum percent difference of −10.6%. FIG. 3 plots the measured preload after retightening against the tightening torque for all 19 preload measurements. The spread in measured preload is 185 lb. FIG. 4 plots this measured preload and the corresponding computed preload against the tightening torque minus the removal torque. The difference between computed preload and measured preload after retightening shown in FIG. 4 is much less than the spread in measured preload after retightening with tightening torque shown in FIG. 3. This provides evidence of the validity of the concept of using thread pitch and measured tightening minus removal torque for calculating preload as defined in Equation 5, with retightening to tightening torque included in the process. As an example of using this approach, consider a 0.25-28 bolt and nut installed with a torque wrench to a tightening torque of 60 in-lb. The torque wrench is used to measure the removal torque of 49 in-lb and then used to retighten to 60 in-lb. The preload is calculated using Equation 5

Fp = (π/p)(Tt − Tr) = [π/(1/28)](60 − 49) = 968 lb    (11)

Since the data used in this example is from row 1 of Table 2, the measured preload from retightening is 900 lb. The percent difference between measured and computed preload is −7.5%. From Equation 8, the self-loosening computed from the measured torque difference compares well to the self-loosening computed from the measured preload after retightening.

½ (Tt − Tr) = ½ (60 − 49) = 5.5 in-lb;  Fp p/(2π) = (900)(1/28)/(2π) = 5.1 in-lb    (12)

From Equation 9, the primary locking is computed from the torque sum as

Fp (μt rt/cos β + μn rn) = ½ (Tt + Tr) = ½ (60 + 49) = 54.5 in-lb    (13)

Bolt with Lock Nut Test Data Table 3 presents sample data from tests with an AN4-14A bolt and an AN365-428A elastic stop lock nut. In these tests, the 6-30 in-lb range torque wrench was used on the lock nut to measure the "on" prevailing torque in the tightening direction once the locking feature was fully engaged with zero preload. The 15-75 in-lb range torque wrench was then used on the lock nut to apply a tightening torque of 60 in-lb plus the measured prevailing torque. For example, if the measured prevailing torque was 15 in-lb, then the tightening torque would be 75 in-lb. Next, torque was applied to the lock nut in the opposite direction to measure the removal torque. The lock nut was turned until there was zero preload with full engagement of the locking feature, then the "off" prevailing torque was measured. The lock nut was retightened, the "on" prevailing torque was measured again, and then a tightening torque of 60 in-lb plus the prevailing torque was applied. The preload was measured with the load cell after retightening.
The process of tightening, loosening (i.e., applying a removal torque), and retightening was performed 11 times with the same bolt and lock nut. Each row in Table 3 presents torque measurements, preload measurements after retightening, and calculations of preload, percent difference, self-loosening, and primary locking for the process. The percent difference shown in the table is the difference between the measured preload after retightening and the computed preload from the torque difference.
TABLE 3. Measurements and calculations for AN4-14A bolt with AN365-428A lock nut.
Measurements: Tpv on (in-lb) | Tt (in-lb) | Tr (in-lb) | Tpv off (in-lb) | Tpv on (in-lb) | Tt (in-lb) | Fp (lb); Calculations: π(Tt − Tr)/p (lb) | % diff | Fp p/2π (in-lb) | (Tt − Tr)/2 (in-lb) | (Tt + Tr)/2 − Tpv (in-lb)
15 | 75 | 64 | 12 | 14 | 74 | 979 | 880 | 10.1 | 5.6 | 5.0 | 55.0
14 | 74 | 62 | 12 | 13 | 73 | 940 | 968 | −2.9 | 5.3 | 5.5 | 54.5
13 | 73 | 63 | 12 | 12 | 72 | 893 | 792 | 11.3 | 5.1 | 4.5 | 55.5
12 | 72 | 62 | 12 | 12 | 72 | 852 | 880 | −3.2 | 4.8 | 5.0 | 55.0
12 | 72 | 62 | 12 | 12 | 72 | 850 | 880 | −3.5 | 4.8 | 5.0 | 55.0
12 | 72 | 63 | 12 | 13 | 73 | 809 | 880 | −8.7 | 4.6 | 5.0 | 55.0
13 | 73 | 63 | 13 | 12 | 72 | 824 | 792 | 3.9 | 4.7 | 4.5 | 55.5
12 | 72 | 63 | 12 | 12 | 72 | 751 | 792 | −5.4 | 4.3 | 4.5 | 55.5
12 | 72 | 64 | 12 | 12 | 72 | 736 | 704 | 4.4 | 4.2 | 4.0 | 56.0
12 | 72 | 64 | 12 | 12 | 72 | 667 | 704 | −5.5 | 3.8 | 4.0 | 56.0
12 | 72 | 64 | 12 | 12 | 72 | 681 | 704 | −3.3 | 3.9 | 4.0 | 56.0
This data shows a decrease in "on" prevailing torque with reuse. This is an expected result with lock nuts. The "off" prevailing torque is less than the "on" prevailing torque for the initial use and first couple of reuses, but then is consistent in both directions. If these torques are significantly different in the two directions, adjustments should be made in calculating preload and self-loosening. The tightening torque was 60 in-lb plus the "on" prevailing torque. As with the previous data, the preload dropped with reuse due to an increase in friction at the thread interface and nut face, which is supported by the computed increase in primary locking with reuse. This was not unexpected since the fasteners were used as-received with no added lubricant. The self-loosening computed with the disclosed torque difference compares well with the self-loosening computed with the measured preload after retightening. This is evidence that validates the concept of using the thread pitch and torque difference to determine self-loosening, even for a lock nut with reuse. The preload determined with the thread pitch and torque difference compares well with the measured preload after retightening with a maximum percent difference of 11.3%. FIG. 5 plots the measured preload after retightening against the tightening torque minus the prevailing torque. The spread in measured preload was 312 lb. FIG. 6 plots the corresponding measured and computed preload against the tightening torque minus removal torque. The difference between computed preload and measured preload after retightening in FIG. 6 is much less than the spread in measured preload with tightening torque minus prevailing torque in FIG. 5. This is still further evidence of the validity of the disclosed concept of calculating preload using thread pitch and torque difference, even for a lock nut with a prevailing torque locking feature and reuse. Bolt with Locking Insert Test Data Table 4 presents sample data from tests with an AN4-14A bolt and a 3591-4CN375 locking Helicoil insert. In these tests, the torque wrench was applied to the bolt head. As before, the lower range torque wrench was used to measure the prevailing torque in both the "on" and "off" directions under the conditions of a fully engaged locking feature and zero preload.
The higher range torque wrench was used to apply a tightening and retightening torque of 60 in-lb plus the measured "on" prevailing torque and then was used to measure the removal torque. The preload after retightening was measured with the load cell. The process of tightening, removing the tightening torque, and retightening was performed 11 times with the same bolt and locking insert. Each row in Table 4 presents the torque measurements, the preload measurement after retightening, and the calculations of the preload, percent difference, self-loosening, and primary locking for the process. The percent difference shown in the table is the difference between the measured preload after retightening and the computed preload from torque difference.
TABLE 4. Tests and calculations for AN4C14A bolt with 3591-4CN375 Helicoil insert.
Measurements: Tpv on (in-lb) | Tt (in-lb) | Tr (in-lb) | Tpv off (in-lb) | Tpv on (in-lb) | Tt (in-lb) | Fp (lb); Calculations: π(Tt − Tr)/p (lb) | % diff | Fp p/2π (in-lb) | (Tt − Tr)/2 (in-lb) | (Tt + Tr)/2 − Tpv (in-lb)
15 | 75 | 58 | 15 | 15 | 75 | 1332 | 1495 | −12.3 | 7.6 | 8.5 | 51.5
15 | 75 | 59 | 11 | 11 | 71 | 1240 | 1056 | 14.9 | 7.0 | 6.0 | 54.0
11 | 71 | 57 | 8 | 8 | 68 | 1131 | 968 | 14.4 | 6.4 | 5.5 | 54.5
8 | 68 | 54 | 8 | 8 | 68 | 1125 | 1232 | −9.5 | 6.4 | 7.0 | 53.5
8 | 68 | 54 | 6 | 6 | 66 | 1002 | 1056 | −5.3 | 5.7 | 6.0 | 54.0
6 | 66 | 55 | 6 | 6 | 66 | 991 | 968 | 2.4 | 5.6 | 5.5 | 54.5
6 | 66 | 55 | 6 | 6 | 66 | 997 | 968 | 2.9 | 5.7 | 5.5 | 54.5
6 | 66 | 55 | 6 | 6 | 66 | 959 | 968 | −0.9 | 5.5 | 5.5 | 54.5
6 | 66 | 55 | 6 | 6 | 65 | 885 | 880 | 0.6 | 5.0 | 5.0 | 54.5
6 | 65 | 55 | 6 | 6 | 65 | 864 | 880 | −1.8 | 4.9 | 5.0 | 54.5
6 | 65 | 55 | 6 | 6 | 65 | 851 | 880 | −3.4 | 4.8 | 5.0 | 54.5
This data shows a significant decrease in prevailing torque with reuse. This is an expected result with locking inserts. As in the previous data, the preload drops with reuse due to an increase in friction at the thread interface and bolt head face, as indicated by the increase in computed primary locking. The self-loosening computed with measured preload after retightening compares well with the self-loosening computed with torque difference. This is evidence of the validity of the concept of using one-half the tightening minus removal torque for determining self-loosening for bolts, even with locking inserts and reuse. The preload determined with thread pitch and torque difference also compares well with the measured preload after retightening with a maximum percent difference of 14.9%. FIG. 7 plots measured preload after retightening against tightening torque minus prevailing torque. The spread in measured preload was 481 lb. FIG. 8 plots the measured and the computed preload against the tightening torque minus the removal torque. The difference between computed preload and measured preload after retightening shown in FIG. 8 is much less than the spread in measured preload in FIG. 7. This is evidence of the validity of the disclosed concept of calculating preload with thread pitch and measured tightening minus removal torque, even for bolts with locking inserts and reuse. As an example of using this approach with fasteners with a prevailing torque locking feature, consider a 0.25-28 bolt and a locking insert. The bolt is installed into the insert with torque wrenches with a measured "on" prevailing torque of 15 in-lb and tightening torque of 75 in-lb. Torque wrenches are used to measure a removal torque of 58 in-lb and an "off" prevailing torque of 15 in-lb. Then, the bolt is retightened with a measured "on" prevailing torque of 15 in-lb and tightening torque of 75 in-lb. The preload is then calculated using Equation 5

Fp = (π/p)(Tt − Tr) = [π/(1/28)](75 − 58) = 1,495 lb    (14)

Since the data used in this example is from row 1 of Table 4, the measured preload after retightening is 1,332 lb. The percent difference between the measured and the computed preload is −12.3%.
From Equation 8, the self-loosening computed from the measured torque difference compares well to the self-loosening computed from the measured preload after retightening

½ (Tt − Tr) = ½ (75 − 58) = 8.5 in-lb;  Fp p/(2π) = (1332)(1/28)/(2π) = 7.6 in-lb    (15)

From Equation 10, the primary locking is computed as

Fp (μt rt/cos β + μn rn) = ½ (Tt + Tr) − Tpv = ½ (75 + 58) − 15 = 51.5 in-lb    (16)

Similar data was obtained for the other tested fasteners. The examples presented above are representative and show reduced uncertainty in preload from torque difference versus tightening torque alone. However, tests with the stainless steel AN4C14A bolts and AN363C428 all-metal lock nuts resulted in severe galling and seizing during the initial use, and these fasteners were not tested further. As noted above, the fasteners were tested in their as-received condition with no cleaning performed or lubricant added. Since the addition of lubricant to the threads and nut face (or bolt face for use with inserts) improves repeatability and reduces uncertainty in preload in torque-tension data, it is expected that the addition of lubricant will further reduce the uncertainty in the computed preload from the torque difference. Practical Applications As described above, the disclosed approach provides a way to achieve a desired or "target" preload from nothing more than thread pitch, tightening torque, and removal torque. No additional information or previous data is needed. The tightening torque and removal torque can be measured with a torque tool, such as a common torque wrench, at the point of use. Consider another example. Given a torque wrench and threaded fasteners in the form of a nut (a first fastener element) and bolt (a second fastener element) both having a thread pitch of 1/28″, it is desired to achieve a target preload of 900 lb in a given application. The target preload could be achieved through knowledge of the thread pitch and by applying trial values of tightening torque, measuring the associated removal torque, and computing the preload using Equation 5. Once a trial tightening torque value is identified that approximates the target preload, that torque value can be used as the tightening torque that is ultimately applied to the nut in the given application. Table 5 illustrates actual data using this process. As shown in that table, an initial trial tightening torque of 20 in-lb was applied to a nut. This resulted in a removal torque of 16 in-lb and a computed preload of 352 lb. The trial tightening torque was then incrementally increased until a preload that approximates the target preload was achieved. The results in Table 5 indicate that a tightening torque of 52 in-lb should provide a preload of approximately 900 lb. The actual preload was then measured and was within a few percent of that value.
TABLE 5. Preload from only thread pitch, tightening torque, and removal torque.
Tt (in-lb) | Tr (in-lb) | π(Tt − Tr)/p (lb)
20 | 16 | 352
30 | 24 | 528
40 | 32 | 703
50 | 40 | 879
60 | 49 | 967
As described above, however, the required tightening torque can be determined more quickly using the trial 20 in-lb tightening torque, the trial 352 lb preload achieved with the trial 20 in-lb tightening torque, and the target preload of 900 lb. A simple ratio of Tt/Tt,trial = Fp/Fp,trial yields

Tt = Tt,trial Fp / Fp,trial = p Tt,trial Fp / [π (Tt,trial − Tr,trial)]    (17)

When the thread pitch, the trial tightening torque of 20 in-lb, the trial removal torque of 16 in-lb, and the target preload of 900 lb are input into Equation 17, the required tightening torque can be computed as (0.0357)(20)(900)/[π(20 − 16)] = 51 in-lb.
If that tightening torque is applied (the “required tightening torque,” i.e., the tightening torque required to achieve the target preload), the target preload will be achieved within a few percent. Therefore, the entire process simply involves: (1) applying and measuring a trial tightening torque, (2) applying and measuring a removal torque, and (3) calculating the required tightening torque using Equation 17 based upon the thread pitch and the target preload, and (4) applying the required tightening torque to the threaded fastener achieve the target preload. This approach is viable even if the trial tightening torque is higher than the required tightening torque. For example, using the data in Table 5, if the trial tightening torque was 60 in-lb, the measured removal torque would be 49 in-lb and the computed trial preload would be 967 lb. Using Equation 17, the tightening torque required for the target preload is (60)(900)/967=56 in-lb, which achieves the target preload within a few percent. FIG.9illustrates an example embodiment of a torque tool10, in the form of a torque wrench, that can be used in situations such as those described above. As shown in the figure, the torque wrench10includes an elongated shaft12having a handle14at one end and a head16at the other end that is adapted to receive sockets that can be used to tighten and loosen nuts and bolts. By way of example, the sockets are configured to tighten and loosen “hex” nuts and bolts having a hexagonal outer peripheries or heads. Mounted to the shaft12is a computing device18that includes a display20and one or more buttons22configured for user interfacing with the device. Torque measurements made with the torque wrench10can be displayed in the display and stored in memory (seeFIG.10) for use in calculating the required tightening torque. The buttons22can be used to make selections and input information, such as thread pitches and target preloads. FIG.10shows an example embodiment for the computing device18of the torque wrench10ofFIG.9. As shown in this figure, the device18includes a processor24and memory26(a non-transitory computer-readable medium). It is noted that, while these two components are shown as independent components, in some embodiments the processor24and memory26can be integrated into a single element, such as a microprocessor or microcontroller chip (seeFIG.11). Irrespective of the particular configuration that is used, the memory26stores a rudimentary operating system28, one or more preload algorithms30, and data storage32. The preload algorithms28, which comprise computer-executable instructions, can be executed by the processor24to apply one or more of the equations described above (such as Equation 5 and/or Equation 17) to estimate various parameters, such as preload and required tightening torque. As a use example, a user can input the pitch of a bolt on which a nut is to be tightened as well as a target preload using the buttons22(alternatively, the display20can be a touch display with “soft” buttons). The torque wrench10can then be used to apply both a trial tightening torque and a trial removal torque to the nut threaded on the bolt. As this is performed, the trial tightening torque and the trial removal torque are measured and stored in memory26. The computing device18can then calculate the tightening torque required to achieve the target preload and display it for the user in the display20. The user can then tighten the nut to the required tightening torque computed by the torque wrench10. 
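The routine that the preload algorithms 30 carry out in this example can be sketched in a few lines. The Python below is a hypothetical illustration (the patent describes the device, not its source code): the user enters the thread pitch and target preload, the trial tightening and removal torques are measured and stored, and the required tightening torque from Equation 17 is shown in the display 20.

```python
import math

def required_tightening_torque(pitch, trial_tightening, trial_removal, target_preload):
    """Equation 17: tightening torque needed for the target preload, from one trial
    tightening/removal torque pair measured at installation."""
    trial_preload = (math.pi / pitch) * (trial_tightening - trial_removal)   # Equation 5
    return trial_tightening * target_preload / trial_preload

# User inputs: 0.25-28 thread (pitch = 1/28 in) and a 900 lb target preload.
# Stored measurements: trial tightening torque 20 in-lb, trial removal torque 16 in-lb.
print(round(required_tightening_torque(1.0 / 28.0, 20.0, 16.0, 900.0)))   # ~51 in-lb shown to the user
```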
Variations and Technique The disclosed approach is a significant improvement over the existing method of using only tightening torque, requires no previously acquired torque-tension data, and calculates preload and/or required tightening torque with reduced uncertainty. Additional expensive tools, training, and preparation are not needed as with other existing methods based on turn-angle or bolt stretch. Other than thread pitch, no previous knowledge, data, or preparation is required. Significantly, because the approach is based on trial tightening torque minus trial removal torque measurements at the point of installation, it inherently accommodates variations in thread interface and nut or bolt face conditions, including the state of coatings, cleanliness, and lubricant. Similarly, the disclosed approach accommodates variations in technique with the torque wrench. For example, some technicians prefer to apply torque slowly and hold at the tightening torque, while others promote a more rapid application of torque. Such differences can have a significant effect if one is only using tightening torque based on previously obtained torque-tension data. The technician or user at installation needs to be aware of which technique was used in obtaining the torque-tension data. With the disclosed approach, the application of torque should be as consistent as possible for trial tightening torque and trial removal torque measurements. A slow, steady application of torque for tightening with a two-second hold at tightening torque is recommended and was used in the above-described testing. Too rapid application of tightening torque may result in lower friction as compared to the friction associated with the removal torque. Any difference in friction between tightening and removal likely accounts for most of the uncertainty in using this new approach to achieve preload. Automated Embodiments: In some embodiments, an automated torque tool can be configured to automatically apply and measure the trial tightening torque, the trial removal torque, and the required tightening torque that achieves the target preload.FIG.11schematically illustrates an example configuration for such an automated torque tool40. As shown in this figure, the tool40can include (among other things) a microcontroller42that functions similarly to the processor24and memory26, a user interface44that functions similarly to that of the torque wrench, as well as a motor48and a motor driver46that is used to control the motor under control of the microcontroller42. In operation, the motor48can be used to drive (apply torque to) a threaded fastener instead of a human user. Advantages of an automated torque tool include an even further reduction of uncertainty in the target preload by removing user technique in torque application and increased simplicity of use. | 40,441 |
11858097 | The reference numerals in the figures are:1. First Pliers Body,2. Second Pliers Body,3. Clamping Rim,4. First Bottom Plate,5. Extrusion Notch,6. Bolt Assembly,7. Second Bottom Plate,8. Through Hole,9. Rotating Notch,10. Center Post. DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS The invention will be described in detail with reference to the drawings. As shown inFIGS.1,2and3, an adjusting-free clamp pliers jaw includes a first pliers body1, a second pliers body2and a base, in which both the first1and second2pliers bodies rotate on the base and the first1and second2pliers bodies are mirror symmetrical. It further includes a clamping rim3and an extrusion notch5, in which the clamping rim3is provided on both the first1and second2pliers bodies, the extrusion notch5is provided on both the first1and second2pliers bodies, the clamping rims3on the first1and second2pliers bodies are jointly used for clamping a clamp, and the extrusion notches5on the first1and second2pliers bodies are jointly used for pinching off the clamp. In this device, the clamping rims3and extrusion notches5are respectively arranged on the first1and second2pliers bodies; the clamp can be clamped by the clamping rims3on the first1and second2pliers bodies (which is equivalent to a "crimping" of the clamp described in patent CN2017219168145), the clamped clamp can be extruded by the extrusion notches5on the first1and second2pliers bodies, and the clamped clamp can be pinched off by the extrusion action of the extrusion notches5(this process is equivalent to the shearing of the clamp described in patent CN2017219168145). Specifically, the device only provides a pliers jaw, which can only be used after other structures such as pliers handles are installed. Specifically, during installation of this device, two holes at the bottom (referring to the position relationship inFIG.1) of the first1and second2pliers bodies are connected with pliers handles, and the installation structure between the two pliers handles is the same as that shown in patent CN2017219168145. The extrusion notches are formed by the first and second pliers bodies being recessed inwards, respectively; an inner wall of the extrusion notch formed by the first pliers body near the clamping rims and an inner wall of the extrusion notch formed by the second pliers body near the clamping rims cooperate to form a shearing part11to pinch off or shear the clamp. An accommodation space12is set between the shearing part and the clamping rims. In this scheme, the clamping rims3and the extrusion notches5are arranged on the first1and second2pliers bodies, so that clamping of the clamp and breaking of the clamp by extrusion are both realized, the crimping and shearing functions are integrated together without a conversion operation, and operation of this device is simple, convenient and fast. As shown inFIGS.1,2and3, the extrusion notch5is an arc-shaped extrusion notch5. As shown inFIGS.1,2and3, the base comprises a first bottom plate4, a second bottom plate7and a bolt assembly6, wherein the first bottom plate4and the second bottom plate7are connected with each other by the bolt assembly6, the first bottom plate4and the second bottom plate7are parallel to each other with the first1and second2pliers bodies positioned between them, and the first1and second2pliers bodies are respectively rotatably arranged on screws of the bolt assembly6through their own through holes8.
As shown inFIGS.1,2and3, it also comprises a rotating notch9and a center post10, wherein the center post10is arranged between the first bottom plate4and the second bottom plate7and perpendicular to the first bottom plate4, the rotation notch9is arranged on both the first1and second2pliers bodies, and the rotation notch9of the first pliers body1and the rotation notch9of the second pliers body2abut against the central post10. The above design is made to ensure the first1and second2pliers bodies to be more stable during a rotation. As shown inFIGS.1,2and3, the first bottom plate4abuts against the extrusion notches5on the first1and second2pliers bodies. In the above configuration, it can be ensured that the clamp abuts against the first bottom plate4when the clamp is extruded by the extrusion notches5, and the first bottom plate4will play a certain role in blocking the clamp, thus ensuring that the clamp cannot be extruded away during pinching off. As shown inFIGS.1,2and3, the first bottom plate4abuts against the clamping rims3on the first1and second2pliers bodies. By abutting the first bottom plate4against the clamping rims3on the first1and second2pliers bodies, it can be ensured that the clamp doesn't shake during a clamping of the rims3and can keep the clamp stable during the clamping. The above description is only the preferred embodiments of the present invention and does not limit the patent scope of the present invention, any equivalent structure process modification used according to the contents of the specification and accompanying drawings in the present invention, no matter whether it is directly or indirectly used in any other related technical field, should be included within the protection scope of the present invention. | 5,245 |
11858098 | Before any embodiments of the disclosure are explained in detail, it is to be understood that the disclosure is not limited in its application to the details of embodiment and the arrangement of components set forth in the following description or illustrated in the following drawings. The disclosure is capable of other embodiments and of being practiced or of being carried out in various ways. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. DETAILED DESCRIPTION FIG.1illustrates a fastener driver, such as a stapler10, for driving fasteners into a workpiece. The stapler10includes a housing14and a handle or lever18pivotably coupled to the housing14. The housing14contains a drive mechanism (not shown) operable to force a fastener22(FIG.3A), such as a staple22aor a nail22b, through an ejection opening24formed in the housing14and into the workpiece. The lever18is pivotable to actuate the drive mechanism to perform the fastening operation. The housing14includes a front side26, a rear side30opposite the front side26, a bottom side34, a top side38opposite the bottom side34, a first lateral side42, and a second lateral side46opposite the first lateral side42. In the illustrated embodiment, the lever18is coupled to the top side38of the housing14at a location near the front side26. The ejection opening24is defined in the bottom side34of the housing14at a location near the front side26. Other embodiments include different locations for the lever18and the ejection opening24. With continued reference toFIG.1, the stapler10includes a removal tool or fastener puller50coupled to the housing14and selectively removable from the housing14for removing fasteners from a workpiece. The fastener puller50includes a removal head54and a pair of branching legs58affixed to the removal head54and extending away therefrom. The legs58extend in a generally parallel manner and terminate at their distal ends as a pair of hooks62. The hooks62face away from one another. The removal head54is generally flat in shape and includes a forked tip portion66that defines a V-shaped nail slot70. The nail slot70is shaped to receive a head of a nail22b. The removal head54has a lateral width74sized to fit between the legs of a staple22aas the removal head54is slid underneath the top member of the staple22a.The removal head54is generally bent away from the legs58(in a direction generally out of plane as viewed inFIGS.2A,2B) such that a pivot region78is defined between the removal head54and the legs58. In operation, the fastener puller50is placed with the pivot region78abutted against the workpiece, and the tip portion66is engaged with an embedded fastener (nail or staple). The legs58are then pressed toward the workpiece, causing the fastener puller50to pivot about the pivot region78and the tip portion66to pry the fastener away from the workpiece. According to one or more embodiments, the legs58form a base portion of the fastener puller50. As shown inFIGS.1-2B, the housing14defines a fastener puller receptacle82that selectively receives the hooks62of the fastener puller50to couple the fastener puller50to the stapler10. In the illustrated embodiment, the fastener puller receptacle82is formed as a recess in the first lateral side42of the housing14, although in other embodiments the recess may be located elsewhere along the stapler10. 
To couple the fastener puller50to the housing14, the legs58are pressed (squeezed) laterally toward one another (in a direction of the arrows inFIG.2B) into a squeezed position, and then the hooks62are inserted into the fastener puller receptacle82. The legs58are then released to assume their natural shape (released position), causing the hooks62to move away from one another and engage the edges of the housing14to thereby secure the fastener puller50to the stapler10. To uncouple the fastener puller50from the stapler10, the legs58are again pressed laterally toward one another, causing the hooks62to disengage from the housing14, and then the hooks62are removed from the fastener puller receptacle82. In addition to being operable to remove fasteners, the fastener puller50is also operable as a belt clip for clipping the stapler10to the belt or other clothing or accessory of a user. Specifically, when the fastener puller50is secured to the fastener puller receptacle82, the removal head54may be slid over the belt of the user to hang the stapler10from the belt. FIGS.3A and3Billustrate a fastener puller50aaccording to another embodiment. The fastener puller50aincludes a removal head54awith a closed tip portion66afor sliding underneath a staple22a. The fastener puller50aalso includes a cross member86aformed in a pivot region78a, with a nail slot70adefined in the cross member86a. An aperture or window94ais formed in the removal head54abetween the closed tip portion66aand the cross member86a. The removal head54a, including the tip portion66aand the cross member86a, is generally flat in shape. The fastener puller50aalso includes a pair of legs58aaffixed to the removal head54aand extending away therefrom generally parallel to one another. The legs58aterminate at their distal ends as a pair of oppositely-facing hooks62a. Unlike the removal head54a, the legs58aare not flat in shape but instead are formed as elongated bars having a round (e.g., circular) cross-sectional shape. The fastener puller50amay be coupled to the stapler10(or to a user's belt) in a manner identical that described above for the fastener puller50. According to one or more embodiments, the legs58aform a base portion of the fastener puller50a. To remove a nail22bfrom the workpiece, the head of the nail22bis fitted into the nail slot70aand the legs58aare pulled away from the workpiece (in a direction of the arrow90a). To remove a staple22afrom the workpiece, the tip portion66ais slid beneath the staple22aand then the legs58aare pressed toward the workpiece (in a direction of the arrow90b). FIGS.4A and4Billustrate a fastener puller50baccording to another embodiment. The fastener puller50bincludes a removal head54bwith a closed tip portion66bfor sliding underneath a staple22a. The fastener puller50balso includes a cross member86bformed in a pivot region78b. An aperture or window94bis formed in the removal head54bbetween the closed tip portion66band the cross member86b. A nail slot70bis defined in an inside edge98bof the window94badjacent the tip portion66b. The removal head54b, including the tip portion66band the cross member86b, is generally flat in shape. The fastener puller50balso includes a pair of legs58baffixed to the removal head54band extending away therefrom generally parallel to one another. The legs58bterminate at their distal ends as a pair of oppositely-facing hooks62b. Unlike the removal head54b, the legs58bare not flat in shape but instead are formed as elongated bars having a round (e.g., circular) cross-sectional shape. 
The fastener puller50bmay be coupled to the stapler10(or to a user's belt) in a manner identical that described above for the fastener pullers50and50a. According to one or more embodiments, the legs58bform a base portion of the fastener puller50b. To remove a nail22bfrom the workpiece, the head of the nail22bis fitted into the nail slot70band the legs58bare pressed toward the workpiece (in a direction of the arrow90b). To remove a staple22afrom the workpiece, the tip portion66bis slid beneath the staple22aand then the legs58bare pressed toward the workpiece (in a direction of the arrow90b). FIGS.4C and4Dillustrate a fastener puller50caccording to another embodiment. The fastener puller50cincludes a removal head54cwith a closed tip portion66cfor sliding underneath a staple22a. The fastener puller50calso includes a cross member86cformed in a pivot region78c. An aperture or window94cis formed in the removal head54cbetween the closed tip portion66cand the cross member86c. A nail slot70cis defined in the cross member86cadjacent the window94c. The removal head54c, including the tip portion66cand the cross member86c, is generally flat in shape. The fastener puller50calso includes a pair of legs58caffixed to the removal head54cand extending away therefrom generally parallel to one another. The legs58cterminate at their distal ends as a pair of oppositely-facing hooks62c. Like the removal head54c, the legs58care also generally flat in shape. The fastener puller50cmay be formed from a metal plate or sheet by a stamping process. In the illustrated embodiment, the removal head54cundergoes a hardening process to provide it with increased strength for pulling the staple22aor the nail22b. The legs58cdo not undergo the hardening process, and as such, they retain flexibility and can be squeezed toward one another to couple the hooks62cto the housing14of the stapler10. Each leg58calso includes an elongated tab59cextending along its outer side and bent downward, i.e., away from a primary portion of the leg58c. The tabs59cadd strength to the legs58cto resist bending when a prying force is applied to the legs58cin the direction of the arrow90c. According to one or more embodiments, the legs58cform a base portion of the fastener puller50c. The fastener puller50cmay also be coupled to a user's belt in a manner identical to that described above for the fastener pullers50,50a, and50b. Furthermore, according to one or more embodiments, instead of the legs58,58a,58b,58cwith the hooks62,62a,62b,62cfor engaging with the housing14of the stapler10, the fastener puller50,50a,50b,50cmay instead have a base portion thereof having an opening into which a screw may be inserted to screw onto a corresponding screw hole in the housing14of the stapler10. For example, the screw may be a butterfly screw. To remove a nail22bfrom the workpiece, the head of the nail22bis fitted into the nail slot70cand the legs58care pulled away from the workpiece (in a direction of the arrow90c). To remove a staple22afrom the workpiece, the tip portion66cis slid beneath the staple22aand then the legs58care pressed toward the workpiece (in a direction of the arrow90c). FIGS.5-6Billustrate a stapler210according to another embodiment. The stapler210may be operable with one or more of the fastener pullers50,50a,50b, and50cdescribed above. Or, the fastener pullers50,50a,50b, and/or50cmay be omitted from the stapler210. 
The stapler210is generally similar to the stapler10described above, and additionally includes an integrated or fixed fastener puller250that is fixedly secured to the housing14. The fixed fastener puller250is affixed to a corner102of the housing14that connects the front side26with the top side38. In other embodiments, (not shown), the fixed fastener puller250can be located in other areas of the housing14(e.g., the front side26, the rear side30, the top side38, a corner connecting the rear side30and the top side38, etc.). The fixed fastener puller250is generally flat and L-shaped and includes a tip portion66spaced apart from the corner102to form a gap therebetween. The tip portion66is forked and defines a V-shaped nail slot70shaped to receive a head of a nail22b. The tip portion66has a lateral width sized to fit between the legs of a staple22aas the tip portion66is slid underneath the top member of the staple22a.In operation, the tip portion66is engaged with a fastener, and then the entire stapler210is rotated to pry the fastener away from a workpiece via the tip portion66as shown inFIG.6B. FIGS.7-10Cillustrate a stapler410according to another embodiment. The stapler410may be operable with one or more of the fastener pullers50,50a,50b,50cdescribed above. Or, the fastener pullers50,50a,50b, and/or50cmay be omitted from the stapler410. The stapler410is generally similar to the stapler10described above, and additionally includes a retractable fastener puller450that is slidably coupled to the housing14. The retractable fastener puller450is provided at the corner102of the housing14that connects the front side26with the top side38. In other embodiments, (not shown), the retractable fastener puller450can be located in other areas of the housing14(e.g., the front side26, the rear side30, the top side38, the corner connecting the rear side30and the top side38, etc.). The retractable fastener puller450is generally flat and L-shaped and includes a tip portion66that is forked and defines a V-shaped nail slot70shaped to receive a head of a nail22b. The tip portion66has a lateral width sized to fit between the legs of a staple22aas the tip portion66is slid underneath the top member of the staple22a. The retractable fastener puller450is translatable or slidable between a retracted position (FIG.10A) embedded flush with the surrounding surface of housing14and an extended position (FIG.10C) protruding from the surrounding surface of the housing14so as to engage and pry a fastener22. In the illustrated embodiment, the stapler410further includes an outwardly-biased lock member106that secures the retractable fastener puller450in each of the retracted and extended positions. The lock member106is biased outward by a biasing member or spring110supported within the housing14. To move the retractable fastener puller450to the extended position, the user may press a tab114of the retractable fastener puller450to slide the fastener puller450outward in a direction away from the housing14. Upon moving the retractable fastener puller450to the extended position, the spring110forces the lock member106outward, causing the lock member106to abut the tab114and thereby hold the fastener puller450in the extended position. In the extended position, the retractable fastener puller450can be engaged with an embedded fastener22and the stapler410can be rotated to pry the fastener22away from the workpiece. 
To move the retractable fastener puller450to the retracted position, the user presses the lock member106inward (i.e., toward the housing), and then slides the retractable fastener puller450back toward the housing14to the retracted position. In the retracted position, the spring110presses the lock member106against the retractable fastener puller450to hold the fastener puller450in the retracted position. FIGS.11-15Cillustrate a stapler610according to another embodiment. The stapler610may be operable with one or more of the fastener pullers50,50a,50b,50cdescribed above. Or, the fastener pullers50,50a,50b, and/or50cmay be omitted from the stapler610. The stapler610is generally similar to the stapler10described above, and additionally includes a pivotable fastener puller650that is rotatably coupled to the housing14. The pivotable fastener puller650is provided at the corner102of the housing14that connects the front side26with the top side38. In other embodiments, (not shown), the pivotable fastener puller650can be located in other areas of the housing14(e.g., the front side26, the rear side30, the top side38, the corner connecting the rear side30and the top side38, etc.). The pivotable fastener puller650is generally flat and L-shaped and includes a tip portion66spaced apart from the corner102to form a gap therebetween. The tip portion66is forked and defines a V-shaped nail slot70shaped to receive a head of a nail22b. The tip portion66has a lateral width sized to fit between the legs of a staple22aas the tip portion66is slid underneath the top member of the staple22a. The pivotable fastener puller650is movable between a retracted position (FIG.15A) with the tip portion66adjacent the first lateral side42of housing14and an extended position (FIG.15C) with the tip portion66adjacent the front side26of housing14so as to engage and pry a fastener22. A pin118rotatably secures the pivotable fastener puller650to the housing14and defines a pivot axis. The pin118is spring-biased toward the housing14(e.g., along the axis) so that the pin118pulls the pivotable fastener puller650inward toward the housing14. A pair of locating tabs122are provided adjacent the pin118for securing the pivotable fastener puller650in the retracted or extended positions, respectively. Each locating tab122is received into a locating recess126formed in the pivotable fastener puller650when the fastener puller650is located in the respective retracted or extended position. In operation, the fastener puller650is moved from the retracted position to the extended position by first pulling the fastener puller650away from the housing14(e.g., along the axis) to disengage the locating tab122from the locating recess126. Then, the fastener puller650is rotated about the axis of the pin118to the extended position and released, such that the spring-biased pin118pulls the fastener puller650back toward the housing and the locating tab122is received into the locating recess126to secure the fastener puller650in the extended position. In the extended position, the pivotable fastener puller650can be engaged with an embedded fastener22and the stapler610can be rotated to pry the fastener22away from the workpiece. The pivotable fastener puller650can be replaced to the retracted position in a manner similar to that described above. Various features of the disclosure are set forth in the following claims. | 17,249 |
11858099 | DETAILED DESCRIPTION Referring toFIGS.1-6, a clinch staple mechanism10according to the exemplary embodiments of the disclosure is a piece of equipment for attaching to pneumatic stapling tools11. Clinch staple mechanism10includes a pivoting base12for the pneumatic stapling tool11to attach to, a clinch arm14that holds a clinch block18on a distal end31of clinch arm14. Pneumatic stapling tool11is pivotally connected to pivoting base12at pivoting base pivot point21. Clinch arm14is pivotally connected at a proximal end33thereof to pivoting base12at clinch arm pivot point13. Clinch block18may be supported on clinch arm14by a clinch block base16. Clinch block18has a pair of staple leg tracks20in the form of parallel grooves for the guiding of staples in a particular direction relevant to the orientation of the wood grain of the wood being clinched. Exemplary embodiments of this disclosure allow for the clinching of two pieces of wood such as a top deck board32and a stringer board34whose wood grain orientations are opposite or perpendicular to one another. As shown by example inFIG.6, a pallet30may include top deck boards32and stringer boards34. Top deck boards32may have a wood grain direction A which is perpendicular or opposite to a wood grain direction B of stringer boards34. The clinch staple mechanism10is used by attaching a pneumatic stapling tool11to pivoting base12and to an air supply attachment22configured for receiving air from the pneumatic stapling tool11(FIGS.2A and2B). Once the pneumatic stapling tool11is attached to pivoting base12, it is usable, for example, for clinching together wooden deck boards32and stringer boards34(FIG.6). The user inserts clinch arm14so that clinch block18is positioned beneath a top deck board32and touching near the underside of a stringer board34of a pallet30and pneumatic tool11is positioned above the top deck board32. Then the user presses downward on pneumatic stapling tool11and pulls the trigger15on the pneumatic stapling tool11. This sends air to air supply attachment22, which then activates a pivot actuating air cylinder24, which causes clinch arm14to clinch upwards, thereby pressing top deck board32together with stringer board34at the same time a staple26having a crown29and a pair of staple legs28is dispensed or fired down from the pneumatic stapling tool and through top deck board32and stringer board34. As the staple26goes through top deck board32and stringer board34, staple legs28of staple26pass through top deck board32and stringer board34and enter staple leg tracks20(FIG.4) on clinch block18. Clinch block18has a curved substantially U-shaped profile19configured to face the underside of the wood pieces being clinched such as the underside of stringer board34. Staple leg tracks20follow curved profile19so that each staple leg track20has a bottom portion23, a forward curved portion25and a rearward curved portion27. After passing through top deck board32and stringer board34, staple legs28will reach the bottom portion23of staple leg tracks20. After the staple legs28reach bottom portion23, they are diverted and bent in the direction of staple leg tracks20following either forward curved portion25or rearward curved portion27of staple tracks20. 
For example, as staple legs28bend, they bend either forward (away from proximal end33of clinch arm14) or backward (towards proximal end33of clinch arm14) following forward curved portion25or rearward curved portion27of staple tracks20until they curve upwardly and re-enter stringer boards34and top deck boards32from underneath such that staple legs28are not parallel with crown29. This forward or backward clinching direction adds far greater clinching strength to the deck boards, as it clinches in the direction of wood grain of the wood on both the top deck board and the stringer board underneath. Releasing trigger15of pneumatic stapling tool11allows for clinch arm14to release the wood so the next clinch can be shot on. Exemplary embodiments disclosed herein allow for the consistent control of the clinch direction, thereby allowing for stronger pullout tension and longer product life of pallet30than typical clinch staple mechanisms. According to exemplary embodiments and referring toFIGS.5and6, crown29of staple26is disposed in crown direction C and perpendicularly traverses the direction A of the wood grain of top deck board32and extends in the direction of B of the wood grain of stringer board34underneath, and also staple legs28are bent in direction D perpendicular to crown direction C and perpendicularly traverse direction B of the wood grain of stringer board34underneath and are bent in the direction A of the wood grain of top deck board32. This clinching format of staple26creates a clinch that does not pull back through the wood grain of the wood, but instead it captures both opposing directions of the wood grain in each board32,34. Features of the disclosed embodiments may be combined, rearranged, omitted, etc., within the scope of the invention to produce additional embodiments. Furthermore, certain features may sometimes be used to advantage without a corresponding use of other features. Many alternatives, modifications, and variations are enabled by the present disclosure. While specific embodiments have been shown and described in detail to illustrate the application of the principles of the invention, it will be understood that the exemplary embodiments may be embodied otherwise without departing from such principles. Accordingly, Applicants intend to embrace all such alternatives, modifications, equivalents, and variations that are within the spirit and scope of the exemplary embodiments. | 5,680 |
11858100 | Before any embodiments of the disclosure are explained in detail, it is to be understood that the disclosure is not limited in its application to the details of construction and the arrangement of components set forth in the following description or illustrated in the following drawings. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. DETAILED DESCRIPTION FIGS.1and2illustrate an impact power tool, such as rotary hammer10, according to an embodiment of the invention. The rotary hammer10includes a housing14having a D-shaped handle16, a motor18disposed within the housing14, and a rotatable spindle22coupled to the motor18for receiving torque from the motor18. In the illustrated embodiment, the rotary hammer10includes a quick-release mechanism24coupled for co-rotation with the spindle22to facilitate quick removal and replacement of different tool bits. A tool bit25may include a necked section or a groove in which a detent member of the quick-release mechanism24is received to constrain axial movement of the tool bit25to the length of the necked section or groove. The rotary hammer10defines a tool bit reciprocation axis26, which in the illustrated embodiment is coaxial with a rotational axis28of the spindle22. The motor18is configured as a brushless direct current (BLDC) motor that receives power from an on-board power source (e.g., a battery pack, not shown). The battery pack may include any of a number of different nominal voltages (e.g., 12V, 18V, etc.), and may be configured having any of a number of different chemistries (e.g., lithium-ion, nickel-cadmium, etc.). In some embodiments, the battery pack removably coupled to the housing14. Alternatively, the motor18may be powered by a remote power source (e.g., a household electrical outlet) through a power cord. The motor18is selectively activated by depressing an actuating member, such as a trigger32, which in turn actuates an electrical switch for activating the motor18. With reference toFIG.2, the rotary hammer10further includes a reciprocating impact mechanism30having a reciprocating piston34disposed within the spindle22, a striker38that is selectively reciprocable within the spindle22in response to a variable pressure air spring developed within the spindle22by reciprocation of the piston34, and an anvil42that is impacted by the striker38when the striker38reciprocates toward the tool bit25. The impact is then transferred from the anvil42to the tool bit25. Torque from the motor18is transferred to the spindle22by a transmission46. With reference toFIGS.3and4, the transmission46includes an input gear50having a bevel gear51and a first intermediate gear53disposed coaxially with the bevel gear51for co-rotation therewith. In some embodiments, the bevel gear51and the first intermediate gear53may be integral. The bevel gear51is engaged with a beveled pinion54on an output shaft56driven by the motor18, which defines a motor axis58(FIG.2). The motor axis58extends in the same direction as and is offset from the reciprocation axis26and the rotational axis28of the spindle22. As such, motor axis58is parallel with the reciprocation axis26and the rotational axis28of the spindle22. The first intermediate gear53is meshed with a second intermediate gear60on an intermediate shaft62that is supported by a gearcase64(FIGS.2and3). 
The intermediate shaft62supports an intermediate pinion66that engages an output gear68coupled for co-rotation with the spindle22. The output gear68is secured to the spindle22using a spline-fit or a key and keyway arrangement, for example, that facilitates axial movement of the spindle22relative to the output gear68yet prevents relative rotation between the spindle22and the output gear68. In some embodiments, the transmission46may include a clutch that may limit the amount of torque transferred from the motor18to the spindle22. In further embodiments, the clutch may disengage the transmission46from transferring rotation from the motor18to the spindle22. With reference back toFIGS.1and2, the rotary hammer10includes a mode selection member74rotatable by an operator to switch between three modes. In a “hammer-drill” mode, the motor18is drivably coupled to the piston34for reciprocating the piston34while the spindle22rotates. In a “drill-only” mode, the piston34is decoupled from the motor18but the spindle22is rotated by the motor18. In a “hammer-only” mode, the motor18is drivably coupled to the piston34for reciprocating the piston34but the spindle22does not rotate. As shown inFIGS.3and4, the impact mechanism30includes a crankshaft78that is rotatably supported within the gearcase64for co-rotation with the bevel gear51and the first intermediate gear53. In other words, the bevel gear51is concentric with the crankshaft78. The crankshaft78defines a crank axis82(FIG.2) that is parallel with a rotational axis86of the intermediate shaft62and intermediate pinion66. The crank axis82and the rotational axis86of the intermediate shaft62are perpendicular to the motor axis58and both the reciprocating axis and the rotational axis26,28of the spindle22. A bearing90(e.g., a roller bearing, a bushing, etc.) is supported by the gearcase64and rotatably supports the crankshaft78. The crankshaft78includes a hub94with an eccentric pin98. In the illustrated embodiment, the hub94and the eccentric pin98are integrally formed with the crankshaft78. The impact mechanism30further includes a connecting rod102(FIG.3) interconnecting the piston34and the eccentric pin98. In some embodiments, the impact power tool10may not include the transmission46to transfer rotation from the motor18to the spindle22. In such an embodiment, the impact mechanism30would only be operable to impart an axial impact to a tool bit. For example, the impact power tool10tool may be a breaker that imparts axial impacts to a large tool bit to break up concrete and other similar workpieces. ReferencingFIGS.2and3, because the motor18and the spindle22are parallel, the housing14is configured with an elongated shape. As such, a majority of the mass of the rotary hammer10is located between the motor axis58and the axes26,28of the spindle22. This results in a center of gravity of the rotary hammer10(schematically represented as “CG” inFIG.4) being positioned between the motor axis58and the axes26,28of the spindle22. In some embodiments of the rotary hammer10, the center of gravity is between 4 mm and 5 mm above the motor axis58from the frame of reference ofFIG.4. Having the center of gravity of the rotary hammer10between the motor axis58and the axes26,28of the spindle22locates the force applied by the user on the handle16, when drilling in an upward direction, generally inline with the center of gravity. 
Therefore, the moment exerted on the user by the rotary hammer10when drilling in an upward direction is decreased, reducing user fatigue when holding the rotary hammer10for drilling in an upward direction. In addition, the elongated housing14reduces the distance a user must reach in order to perform a drilling operation. Further, providing the impact mechanism30with a crankshaft78to convert rotary motion from the motor18to reciprocating motion of the piston34, advantageously reduces the amount of vibration caused by the impact mechanism30compared to typical rotary hammers that include a wobble assembly. Various features and advantages are set forth in the following claims. | 7,499 |
11858101 | DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS Hereinafter, representative embodiments of some embodiments of a driver will be described with reference to the drawings. The same configurations are denoted by the same reference characters throughout the drawings, and the repetitive description thereof will be omitted. First Embodiment A driver10shown inFIG.1includes a housing11, a striking mechanism12, a pressure chamber13, a power conversion mechanism14, and an electric motor15. The striking mechanism12is disposed from the inside to the outside of the housing11. The pressure chamber13moves the striking mechanism12from the top dead center to the bottom dead center in a first direction B1. The power conversion mechanism14moves the striking mechanism12in a second direction B2opposite to the first direction B1. The torque of the electric motor15is transmitted to the power conversion mechanism14. The housing11has a main body16, a cover17, a handle18, a motor case19, and a connecting portion20. The cover17closes an opening of the main body16. The handle18and the motor case19are connected to the main body16. The handle18and the motor case19are connected to the connecting portion20. A pressure accumulation container21and a cylinder22are provided in the housing11, and an annular connector23connects the pressure accumulation container21and the cylinder22. The pressure chamber13is formed in the pressure accumulation container21. The striking mechanism12includes a piston24and a driver blade25. The piston24is movable in the cylinder22in a direction of a center line A1of the cylinder22. The driver blade25is fixed to the piston24. The direction of the center line A1is parallel to the first direction B1and the second direction B2. As shown inFIG.2, a sealing member83is attached to an outer circumference of the piston24, and the sealing member83is in contact with an inner surface of the cylinder to form a sealing surface. The sealing member83air-tightly seals the pressure chamber13shown inFIG.1. A compressed gas is held in the pressure chamber13. Examples of the gas held in the pressure chamber13include inert gas such as nitrogen gas, noble gas or others in addition to air. In this embodiment, an example in which air is held in the pressure chamber13will be described. The driver blade25is made of metal. As shown inFIG.3,FIG.4, andFIG.5, the driver blade25has a plate-shaped main body portion25K and a plurality of convex portions25A to25H provided to the main body portion25K. The driver blade25is movable in the direction of the center line A1. The plurality of convex portions25A to25H are provided in a moving direction of the driver blade25. The plurality of convex portions25A to25H are arranged at constant intervals in the direction of the center line A1. In this embodiment, eight convex portions25A to25H are provided to the driver blade25. The convex portions25A to25H protrude from an edge26of the driver blade25. The direction in which the convex portions25A to25H protrude from the edge26is a direction intersecting with the center line A1. The convex portions25A to25H are sequentially arranged in the direction of the center line A1. The convex portion25A is arranged at the position where the distance from the piston24in the direction of the center line A1is smallest, and the convex portion25H is arranged at the position where the distance from the piston24is largest. 
Protrusion amounts H1from the edge26to respective tips of the convex portions25A to25H differ in each of the convex portions25A to25H. The protrusion amount H1of the convex portion25A having the smallest distance from the piston24in the direction of the center line A1is smallest, and the protrusion amounts H1of the convex portions25A to25H gradually increase as the distance from the piston24increases. A holder27is disposed from the inside to the outside of the main body16. The holder27is made of aluminum alloy or synthetic resin. The holder27has a cylindrical load receiving portion28, an arc-shaped cover29continuous to the load receiving portion28, and a nose portion30continuous to the load receiving portion28. The nose portion30has an injection path34. A part of the nose portion30is disposed outside the housing11. The load receiving portion28is disposed in the main body16, and the load receiving portion28has a shaft hole31. A bumper32is provided in the load receiving portion28. The bumper32is integrally formed of a rubber-like elastic material. The bumper32has a shaft hole33. The shaft holes31and33are connected, and the driver blade25is movable in the direction of the center line A1in the shaft holes31and33and the injection path34. As shown inFIG.1, the electric motor15is provided in the motor case19. The electric motor15includes a rotor15A and a stator15B, and the rotor15A is fixed to a motor shaft35. The motor shaft35is rotatably supported by a bearing36. The motor shaft35is rotatable about an axis line A2. A storage battery37detachably attached to the connecting portion20is provided, and the storage battery37supplies power to the electric motor15. The storage battery37includes a container case38and a battery cell contained in the container case38. The battery cell is a secondary battery that can be charged and discharged, and any of a lithium ion battery, a nickel hydride battery, a lithium ion polymer battery, and a nickel cadmium battery can be used as the battery cell. The storage battery37is a DC power source. A first terminal is provided in the container case38, and the first terminal is connected to the battery cell. When a second terminal is fixed to the connecting portion20and the storage battery37is attached to the connecting portion20, the first terminal and the second terminal are connected so as to allow a current to flow therebetween. As shown inFIG.2, a gear case39is provided in the housing11so as to be unable to rotate. A speed reducer40is provided in the gear case39. The speed reducer40includes an input member41, an output member42, and three pairs of planetary gear mechanisms. The input member41is fixed to the motor shaft35. The input member41is rotatably supported by a bearing43. The input member41and the output member42are rotatable about the axis line A2. A rotational force of the motor shaft35is transmitted to the output member42through the input member41. The speed reducer40reduces a rotation speed of the output member42with respect to the input member41. As shown inFIG.2, the power conversion mechanism14is disposed in the cover29. The power conversion mechanism14converts a rotational force of the output member42to a moving force of the striking mechanism12. The power conversion mechanism14includes a pin wheel shaft44integrally rotating with the output member42, a pin wheel45fixed to the pin wheel shaft44, and a plurality of pins45A to45H provided to the pin wheel45. The pin wheel45includes plates45J and45K. 
The plates45J and45K are disposed in parallel to each other at an interval in a direction of the axis line A2. The plurality of pins45A to45H are disposed between the plates45J and45K. The pin45A can be engaged with and released from the convex portion25A, the pin45B can be engaged with and released from the convex portion25B, and the pin45C can be engaged with and released from the convex portion25C. The pin45D can be engaged with and released from the convex portion25D, and the pin45E can be engaged with and released from the convex portion25E. The pin45F can be engaged with and released from the convex portion25F, and the pin45G can be engaged with and released from the convex portion25G. The pin45H can be engaged with and released from the convex portion25H. The pin wheel shaft44is rotatably supported by bearings46and47. The pin wheel shaft44is rotatable about the axis line A2. As shown inFIG.3toFIG.5, the axis line A2and the center line A1do not intersect with each other in plan view perpendicular to the axis line A2. As shown inFIG.3, a plurality of pins, that is, the eight pins45A to45H are arranged at intervals in a rotation direction of the pin wheel45. Radii R1from respective centers of the eight pins45A to45H to the axis line A2are different from each other in a radial direction of the pin wheel45. A first region85and a second region86disposed in different regions in the rotation direction are provided on an outer circumference of the pin wheel45. The first region85is provided in a range of about 270 degrees in the rotation direction of the pin wheel45, and the second region86is provided in a range of about 90 degrees in the rotation direction of the pin wheel45. The first region85has a constant radius R5. A radius R6of the second region86is not constant. The radius R5is larger than the radius R6. Namely, the second region86is formed by cutting a part of the pin wheel45in the rotation direction. The eight pins45A to45H are provided at positions corresponding to the first region85in the rotation direction of the pin wheel45. The radius R1from the center of the pin45A located at the front in the rotation direction of the pin wheel45among the eight pins45A to45H to the axis line A2is largest. The radii R1decrease as approaching to the pin45H located at the back in the rotation direction of the pin wheel45. In the embodiment shown inFIG.3toFIG.5, the radii R1from respective centers of the pins45A to45H to the axis line A2are all different from each other. When the pin wheel45is rotated, a moving range of the eight pins45A to45H about the axis line A2is present outside a moving range of the edge26of the driver blade25. A rotation restricting mechanism48is provided in the gear case39. The rotation restricting mechanism48is disposed in a power transmission path between the input member41and the output member42. The rotation restricting mechanism48is a rolling element, for example, a roller or a ball. The rotation restricting mechanism48is disposed between a rotational element of the planetary gear mechanism, for example, a carrier49and the gear case39. When the torque in the first direction is transmitted from the electric motor15to the carrier49, the rotation restricting mechanism48allows the pin wheel45to rotate in a counterclockwise direction inFIG.3by the torque. 
When the torque in the clockwise direction inFIG.3is applied from the driver blade25to the pin wheel45, so that the torque is transmitted to the carrier49and the torque in the second direction is applied, the rotation restricting mechanism48bites between the carrier49and the gear case39and prevents the pin wheel45from rotating in the clockwise direction inFIG.3. Also, as shown inFIG.1, the magazine50is supported by the nose portion30and the housing11. Nails51are contained in the magazine50. A plurality of nails51are coupled by a connecting element such as a wire or an adhesive. The magazine50includes a feed mechanism, and the feed mechanism supplies the nail51in the magazine50to the injection path34. A motor board52is provided in the motor case19, and an inverter circuit53shown inFIG.6is provided on the motor board52. The inverter circuit53includes a plurality of switching elements and each of the plurality of switching elements can be individually switched on and off. As shown inFIG.1, a control board54is provided in the housing11and a controller84shown inFIG.6is provided on the control board54. The controller84is a microcomputer including an input port, an output port, a central processing unit, and a memory device. As shown inFIG.1, a trigger55is provided to the handle18. The trigger55is movable with respect to the handle18. A trigger switch56is provided in the handle18, and the trigger switch56is turned on when an operation force is applied to the trigger55and is turned off when the operation force is released. As shown inFIG.2, a push lever57is attached to the nose portion30. The push lever57is movable in the direction of the center line A1with respect to the nose portion30. An elastic member58configured to bias the push lever57in the direction of the center line A1is provided. The elastic member58is a compression coil spring made of metal, and the elastic member58biases the push lever57in the direction away from the bumper32. A push lever stopper59is provided to the nose portion30, and the push lever57biased by the elastic member58is stopped while being in contact with the push lever stopper59. A push switch60shown inFIG.6is provided. The push switch60is turned on when the push lever57is pressed to a workpiece W1and is moved in the direction approaching to the bumper32by a predetermined amount. The push switch60is turned off when the force to press the push lever57to the workpiece W1is released. A phase detection sensor61configured to detect a rotation angle, that is, a phase of the pin wheel45is provided. A signal of the trigger switch56, a signal of the push switch60, and a signal of the phase detection sensor61are input to the controller84. A work example in which a worker uses the driver10and a control example performed by the controller84are as follows. The controller84determines whether the conditions to strike the nail51are satisfied or not. When the controller84detects at least one of the trigger switch56being turned off and the push switch60being turned off, the controller84determines that the conditions to strike the nail51are not satisfied and turns off all of the switching elements of the inverter circuit53. Therefore, the power of the storage battery37is not supplied to the electric motor15and the electric motor15is stopped. In addition, as shown inFIG.3, the pin45G and the convex portion25G are engaged with each other and the striking mechanism12is stopped at the standby position. 
When the striking mechanism12is at the standby position, the piston24is separated from the bumper32. When the striking mechanism12is stopped at the standby position, the tip of the driver blade25is located between a head of the nail51and the tip of the nose portion30in the direction of the center line A1. When the striking mechanism12is stopped at the standby position and the push lever57is separated from the workpiece W1as shown inFIG.1, the push lever57is stopped while being in contact with the push lever stopper59. Further, the controller84detects that the striking mechanism12is located at the standby position based on the signal output from the phase detection sensor61, and the controller84stops the electric motor15. The rotation restricting mechanism48makes the striking mechanism12stop at the standby position when the electric motor15is stopped. The striking mechanism12receives the biasing force of the pressure chamber13, and the biasing force received by the striking mechanism12is transmitted to the pin wheel shaft44through the pin wheel45. Therefore, the pin wheel shaft44receives the torque in the clockwise direction inFIG.3. The torque received by the pin wheel shaft44is transmitted to the carrier49, and the rotation restricting mechanism48bites between the carrier49and the gear case39. Therefore, the rotation of the pin wheel shaft44in the clockwise direction inFIG.3is prevented, and the striking mechanism12is stopped at the standby position inFIG.3. When the controller84detects that the trigger switch56is turned on and the push switch60is turned on, the controller84determines that the conditions to strike the nail51are satisfied and repeats the control to turn on and off the switching elements of the inverter circuit53, thereby supplying the power of the storage battery37to the electric motor15. Then, the motor shaft35of the electric motor15is rotated. The torque of the motor shaft35is transmitted to the pin wheel shaft44through the speed reducer40. The pin wheel45is rotated in the counterclockwise direction inFIG.3, the striking mechanism12is moved from the standby position in the second direction B2against the force of the pressure chamber13, and the air pressure in the pressure chamber13increases. The movement of the striking mechanism12in the second direction B2means that the striking mechanism12rises inFIG.1. Further, after the pin45H is engaged with the convex portion25H, the pin45G is released from the convex portion25G. When the striking mechanism12reaches the top dead center as shown inFIG.4, the tip of the driver blade25is located at the position higher than the head of the nail51. Also, after the striking mechanism12reaches the top dead center, the pin45H is released from the convex portion25H. Then, the striking mechanism12is moved in the first direction B1by the air pressure of the pressure chamber13. The movement of the striking mechanism12in the first direction B1means that the striking mechanism12falls inFIG.1. The driver blade25strikes the nail51in the injection path34, and the nail51is driven into the workpiece W1. In addition, when the whole of the nail51is driven into the workpiece W1and the nail51is stopped, the tip of the driver blade25is separated from the nail51by the reaction force. Further, the piston24collides with the bumper32as shown inFIG.5, and the kinetic energy of the striking mechanism12is absorbed by the elastic deformation of the bumper32. 
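Before the description of the bottom dead center and the return stroke continues, the switch handling in this work example can be restated compactly. The following Python sketch is illustrative only; the motor and phase_sensor objects and their methods are assumed stand-ins for the inverter circuit53/electric motor15 and the phase detection sensor61, not anything actually disclosed for the controller84.

```python
def striking_cycle(trigger_on: bool, push_on: bool, motor, phase_sensor) -> bool:
    """One pass of the control example: fire only when both switches are on,
    then run the motor until the striker is back at the standby position."""
    if not (trigger_on and push_on):
        motor.stop()             # all inverter switching elements off; motor unpowered
        return False             # striker held at standby by the rotation restricting mechanism
    motor.run()                  # pin wheel rotates: striker raised to the top dead center,
                                 # the last pin releases, gas pressure drives the blade down
    while not phase_sensor.at_standby():
        pass                     # keep turning the pin wheel until the striker is raised
                                 # back to the standby phase
    motor.stop()                 # rotation restricting mechanism then holds the standby position
    return True
```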
The position of the striking mechanism12when the piston24collides with the bumper32is the bottom dead center. Further, the motor shaft35of the electric motor15is rotated even after the driver blade25strikes the nail51. Then, when the pin45A is engaged with the convex portion25A, the striking mechanism12rises again inFIG.1. When the controller84detects that the striking mechanism12reaches the standby position ofFIG.3, the controller84stops the electric motor15. When the electric motor15is stopped, the rotation restricting mechanism48holds the striking mechanism12at the standby position. In this embodiment, from the state where the striking mechanism12is at the bottom dead center, the pin45A is engaged with the convex portion25A, the pin45B is engaged with the convex portion25B, the pin45C is engaged with the convex portion25C, the pin45D is engaged with the convex portion25D, the pin45E is engaged with the convex portion25E, the pin45F is engaged with the convex portion25F, the pin45G is engaged with the convex portion25G, and the pin45H is engaged with the convex portion25H, whereby the striking mechanism12reaches the top dead center. Note that, since two pairs of pins and convex portions are engaged, when the next pin and convex portion are engaged, the pin and convex portion engaged earlier are released. In this embodiment, the radii R1are sequentially shortened as the pins to transmit the torque of the pin wheel45to the striking mechanism12are changed by the rotation of the pin wheel45. Therefore, when the striking mechanism12rises by the torque of the pin wheel45, the radii R1corresponding to the arm of the moment are shortened as the striking mechanism12approaches to the top dead center. Accordingly, it is possible to suppress the increase in the load torque of the pin wheel45, that is, the load torque of the electric motor15as the striking mechanism12approaches to the top dead center. The load torque is a torque necessary for raising the striking mechanism12. In this embodiment, in order to suppress the increase in the load torque of the electric motor15, it is also possible to respectively set the radii R1from the respective centers of the pins45A to45H to the axis line A2in accordance with the increase amount of the load torque when the striking mechanism12is moved in the direction approaching to the top dead center. In this embodiment, the radii R1from the axis line A2to the respective centers of the pins45A to45H are made different from each other. The radius R5of the first region85of the pin wheel45is larger than the radius R6of the second region86. Also, the pin wheel45is preferably made of a metal material having a higher mass or a higher specific gravity compared with a resin or a carbon-based material. In particular, it is preferable that a material having a higher mass or a material having a higher mass and a higher specific gravity than the material of the second region86is used as the material of the first region85of the pin wheel45. This is due to the following reason. When the pin wheel45is rotated in order to raise the striking mechanism12, the moment of inertia in the rotation direction acts on the pin wheel45. Thus, by rotating the pin wheel45at high speed when the load of the electric motor15is light, for example, when the striking mechanism12is near the bottom dead center, the moment of inertia can be accumulated in the pin wheel45by the material having high mass of the first region85of the pin wheel45. 
Further, in the region where the load of the electric motor15is high and the rotation of the electric motor15is low because the striking mechanism12is near the top dead center or the region where the electric motor15is stopped, the load torque of the electric motor15can be further decreased by using the moment of inertia accumulated in the pin wheel45. Namely, in the rotation direction of the first region85of the pin wheel45, the pins45A to45H are arranged toward the inner side gradually in the radial direction, and thus the first region85of the pin wheel45is intentionally made of a material having a high mass. Therefore, the load torque of the electric motor15can be further decreased by the flywheel effect. Also, the protrusion amounts H1of the eight convex portions25A to25H provided to the driver blade25are gradually shortened as approaching to the piston24. Therefore, it is possible to smoothly engage and release the pins and the convex portions. FIG.7is an example of the characteristic showing the relationship between the load torque of the electric motor and the amount of movement of the striking mechanism. The amount of movement of the striking mechanism is the amount of movement from the standby position to the top dead center. The characteristic indicated by a solid line is the embodiment and the characteristic indicated by a broken line is the comparative example. It is supposed that the distance from the axis line to the centers of the pins is constant in the pin wheel of the comparative example. The increase amount of the load torque in the embodiment is smaller than the increase amount of the load torque in the comparative example. The increase amount of the load torque means the increase ratio of the load torque or the increase rate of the load torque. Another example of the pin wheel45and the driver blade25will be described with reference toFIG.8toFIG.10. Radii R2from respective centers of the pins45A to45E to the axis line A2are all the same. Radii R3from respective centers of the pins45F to45H to the axis line A2are all the same. The radius R3is smaller than the radius R2. Protrusion amounts H2of respective convex portions25A to25E provided to the driver blade25are all the same. Protrusion amounts H3of respective convex portions25F to25H are all the same. The protrusion amount H2is smaller than the protrusion amount H3. In the example shown inFIG.8,FIG.9, andFIG.10, the pin45F is engaged with and released from the convex portion25F, the pin45G is engaged with and released from the convex portion25G, and the pin45H is engaged with the convex portion25H during the period when the striking mechanism12is moved from the standby position to the top dead center. In the example shown inFIG.8,FIG.9, andFIG.10, the pins45A to45E are engaged with and released from the convex portions25A to25E during the period when the striking mechanism12is moved from the bottom dead center to the standby position. Therefore, the radii R3corresponding to the pins45F to45H to transmit the torque during the period when the striking mechanism12is moved from the standby position to the top dead center are shorter than the radii R2corresponding to the pins45A to45E to transmit the torque during the period when the striking mechanism12is moved from the bottom dead center to the standby position. 
Therefore, it is possible to suppress the load torque during the period when the striking mechanism12is moved from the standby position to the top dead center from being increased in comparison with the load torque during the period when the striking mechanism12is moved from the bottom dead center to the standby position. Still another example of the pin wheel45and the driver blade25will be described with reference toFIG.11. The pin wheel45shown inFIG.11includes the plate45J and the pins45A to45H provided in the rotation direction of the plate45J. The pins45A to45H are configured in the same manner as the pins45A to45H shown inFIG.3. The pin wheel45inFIG.11does not include the plate45K inFIG.2. The driver blade25and the plate45J are arranged at an interval in the direction of the axis line A2. Convex portions62A to62H are provided on a surface62of the driver blade25closer to the pin wheel45. The convex portions62A to62H are provided at constant intervals in the direction of the center line A1. As shown inFIG.12, protrusion amounts H4of the convex portions62A to62H from the surface62are all the same. When the driver blade25shown inFIG.11is used as the striking mechanism12inFIG.2, the pin45G is engaged with the convex portion62G, and the striking mechanism12is stopped at the standby position. Then, when the pin wheel45is rotated in the counterclockwise direction inFIG.11, the pin45H is engaged with the convex portion62H and the pin45G is then released from the convex portion62G, and the striking mechanism12reaches the top dead center. Further, when the pin45H is released from the convex portion62H, the striking mechanism12falls and strikes the fastener and the striking mechanism12reaches the bottom dead center. When the striking mechanism12reaches the bottom dead center and the pin wheel45is then rotated in the counterclockwise direction inFIG.11, the pin45A is engaged with the convex portion62A, and the striking mechanism12rises from the bottom dead center. When the pin45B is engaged with and released from the convex portion62B, the pin45C is engaged with and released from the convex portion62C, the pin45D is engaged with and released from the convex portion62D, the pin45E is engaged with and released from the convex portion62E, the pin45F is engaged with and released from the convex portion62F, the pin45G is engaged with the convex portion62G, and the striking mechanism12reaches the standby position, the pin wheel45is stopped. The same effect as that of the embodiment shown inFIG.3toFIG.8can be obtained also in the pin wheel45and the driver blade25shown inFIG.11. Second Embodiment A driver110shown inFIG.13includes a housing111, a striking mechanism112, a magazine113, an electric motor114, a conversion mechanism115, a control board116, a battery pack117, and a reaction absorption mechanism208. The housing111has a cylindrical body portion119, a handle120connected to the body portion119, and a motor case121connected to the body portion119. An attaching portion122is connected to the handle120and the motor case121. An injection portion123is provided outside the body portion119, and the injection portion123is fixed to the body portion119. The injection portion123has an injection path124. The user can hold the handle120with a hand and press the tip of the injection portion123to the workpiece W1. The magazine113is supported by the motor case121and the injection portion123. The motor case121is disposed between the handle120and the magazine113in a direction of a center line E1. 
The magazine113contains a plurality of fasteners125. Examples of the fasteners125include nails, and examples of the material of the fasteners125include metal, non-ferrous metal, and steel. The fasteners125are connected to each other by a connecting element. The connecting element may be any one of a wire, an adhesive, and a resin. The fastener125has a rod-like shape. The magazine113includes a feeder. The feeder sends the fastener125contained in the magazine113to the injection path124. The striking mechanism112is provided from the inside to the outside of the body portion119. The striking mechanism112includes a plunger126disposed in the body portion119and a driver blade127fixed to the plunger126. The plunger126is made of metal or synthetic resin. The driver blade127is made of metal. A guide shaft128is provided in the body portion119. The center line E1passes through the center of the guide shaft128. A material of the guide shaft128may be any one of metal, non-ferrous metal, and steel. As shown inFIG.13andFIG.14, a top holder129and a bottom holder130are fixed and provided in the housing111. A material of the top holder129and the bottom holder130may be any one of metal, non-ferrous metal, and steel. The guide shaft128is fixed to the top holder129and the bottom holder130. Guide bars are provided in the body portion119. Two guide bars are provided and the two guide bars are fixed to the top holder129and the bottom holder130. The two guide bars both have a plate-like shape and are disposed in parallel to the center line E1. The plunger126is attached to an outer circumferential surface of the guide shaft128, and the plunger126is operable in the direction of the center line E1along the guide shaft128. The guide shaft128positions the plunger126about the center line E1in the radial direction. The guide bar positions the plunger126about the center line E1in the circumferential direction. The driver blade127is operable in parallel to the center line E1together with the plunger126. The driver blade127is operable in the injection path124. The reaction absorption mechanism208absorbs the reaction received by the housing111. As shown inFIG.14andFIG.15, the reaction absorption mechanism208includes a cylindrical weight118and engaging portions200and201provided to the weight118. A material of the weight118may be any one of metal, non-ferrous metal, and steel. The weight118is attached to the guide shaft128. The weight118is operable in the direction of the center line E1along the guide shaft128. The guide shaft128positions the weight118with respect to the center line E1in the radial direction. The guide bar positions the weight118about the center line E1in the circumferential direction. A spring136is disposed in the body portion119, and the spring136is disposed between the plunger126and the weight118in the direction of the center line E1. For example, a compression coil spring made of metal may be used as the spring136. The spring136can expand and contract in the direction of the center line E1. A first end portion of the spring136in the direction of the center line E1is in direct or indirect contact with the plunger126. A second end portion of the spring136in the direction of the center line E1is in direct or indirect contact with the weight118. The spring136accumulates the elastic energy by receiving the compression force in the direction of the center line E1. The spring136is an example of a biasing mechanism configured to bias the striking mechanism112and the weight118. 
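The energy stored by this biasing mechanism can be estimated with the ordinary spring relations; the symbols below (spring rate k, pre-compression s_0, additional compression s) are not given in the specification and are introduced only for illustration.

F \approx k\,(s_0 + s), \qquad \Delta U \approx \tfrac{1}{2}\,k\,(s_0 + s)^{2} - \tfrac{1}{2}\,k\,s_0^{2}

where s_0 is the compression of the spring 136 in the assembled state and s the additional compression produced when the plunger 126 and the weight 118 are driven toward each other. This stored energy is what later accelerates the driver blade 127 toward the fastener and the weight 118 in the opposite direction.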
The plunger126receives the biasing force in a first direction D1approaching to the bottom holder130in the direction of the center line E1from the spring136. The weight118receives a biasing force in a second direction D2approaching to the top holder129in the direction of the center line E1from the spring136. The first direction D1and the second direction D2are opposite to each other, and the first direction D1and the second direction D2are parallel to the center line E1. The plunger126and the weight118receive a biasing force from the spring136that is physically the same element. A weight bumper137and a plunger bumper138are provided in the body portion119. The weight bumper137is disposed between the top holder129and the weight118. The plunger bumper138is disposed between the bottom holder130and the plunger126. The weight bumper137and the plunger138are both made of synthetic rubber. The driver110shown inFIG.13andFIG.14shows an example in which the center line E1is parallel to the vertical line. The operation in which the striking mechanism112, the plunger126, or the weight118is moved in the first direction D1is referred to as falling. The operation in which the striking mechanism112or the weight118is moved in the second direction D2is referred to as rising. The striking mechanism112and the weight118can reciprocate in the direction of the center line E1. The battery pack117shown inFIG.13can be detachably attached to the attaching portion122. The battery pack117includes a container case139and a plurality of battery cells contained in the container case139. The battery cell is a secondary battery that can be charged and discharged, and any of a lithium ion battery, a nickel hydride battery, a lithium ion polymer battery, and a nickel cadmium battery can be used as the battery cell. The battery pack117is a DC power source and the power of the battery pack117can be supplied to the electric motor114. The control board116shown inFIG.13is provided in the attaching portion122, and a controller140and an inverter circuit141shown inFIG.6are provided on the control board116. The controller140is a microcomputer including an input port, an output port, a central processing unit, and a memory unit. The inverter circuit141includes a plurality of switching elements, and each of the plurality of switching elements can be individually switched on and off. The controller140outputs a signal to control the inverter circuit141. An electric circuit is formed between the battery pack117and the electric motor114. The inverter circuit141is a part of the electric circuit and is configured to connect and disconnect the electric circuit. As shown inFIG.13, a trigger142and a trigger switch143are provided to the handle120, and the trigger switch143is turned on when the user applies an operation force to the trigger142. The trigger switch143is turned off when the user releases the operation force applied to the trigger142. A position detection sensor144is provided in the housing111. The position detection sensor144estimates the positions of the plunger126and the weight118in the direction of the center line E1based on, for example, a rotation angle of the electric motor114and outputs a signal. The driver110shown inFIG.13does not include the push switch60shown inFIG.6. The controller140receives the signal of the trigger switch143and the signal of the position detection sensor144, and outputs the signal to control the inverter circuit141. 
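Only as an illustration of how a rotation angle can be turned into a position signal, the following C sketch assumes a fixed effective lift per revolution of the first gear 150; the real relation depends on which cam roller is engaged, and the constants and names are invented for this example.

/* Illustrative position estimate from the motor rotation angle (assumed, simplified). */
#include <stdio.h>

#define REDUCTION_RATIO  20.0   /* motor revolutions per revolution of the first gear (assumed) */
#define LIFT_PER_REV_MM  30.0   /* plunger lift per revolution of the first gear (assumed)      */

/* Estimated plunger travel from the bottom dead center, in millimetres. */
static double plunger_position_mm(double motor_revolutions)
{
    double gear_revolutions = motor_revolutions / REDUCTION_RATIO;
    return gear_revolutions * LIFT_PER_REV_MM;
}

int main(void)
{
    /* e.g. 10 motor revolutions -> 0.5 gear revolutions -> 15.0 mm with these constants */
    printf("estimated lift: %.1f mm\n", plunger_position_mm(10.0));
    return 0;
}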
The electric motor114shown inFIG.13includes a rotor184and a stator145, and a motor shaft146is attached to the rotor184. The motor shaft146is rotated when the power is supplied from the battery pack117to the electric motor114. A speed reducer147is disposed in the motor case121. The speed reducer147includes several pairs of planetary gear mechanisms, an input element148, and an output element149. The input element148is connected to the motor shaft146. The electric motor114and the speed reducer147are concentrically disposed about the center line E1. The driver110shown inFIG.13shows an example in which an angle formed between the center line E1and an axis line E2is 90 degrees. The conversion mechanism115converts the rotational force of the output element149into the operation force of the striking mechanism112and the operation force of the weight118. The conversion mechanism115includes a first gear150, a second gear151, and a third gear152. A material of the first gear150, the second gear151, and the third gear152may be any one of metal, non-ferrous metal, and steel. A holder153is provided in the housing111, and the output element149is rotatably supported by the holder153. The first gear150is fixed to the output element149. The second gear151is rotatably supported by a supporting shaft154. The third gear152is rotatably supported by a supporting shaft155. The supporting shafts154and155are attached to the holder153. The first gear150is rotatable about the axis line E2, the second gear151is rotatable about an axis line E3, and the third gear152is rotatable about an axis line E4. As shown inFIG.14, the axis lines E2, E3, and E4are disposed at intervals in the direction of the center line E1. The axis line E3is disposed between the axis line E2and the axis line E4. The axis lines E2, E3, and E4are parallel to each other. The third gear152is disposed between the second gear151and the top holder129in the direction of the center line E1. The first gear150is disposed between the second gear151and the magazine113in the direction of the center line E1. As shown inFIG.15, an outer diameter of the first gear150, an outer diameter of the second gear151, and an outer diameter of the third gear152are the same. The second gear151is meshed with the first gear150and the third gear152. A cam roller157is provided to the first gear150, two cam rollers158and202are provided to the second gear151, and two cam rollers159and203are provided to the third gear152. The cam roller157can rotate with respect to the first gear150. The two cam rollers158and202are disposed on the same circumference about the axis line E3. Each of the two cam rollers158and202can rotate with respect to the second gear151. A virtual circle G1passing through the rotation center of the cam roller157has a radius R11. A virtual circle G2passing through the rotation centers of the cam rollers158and202has a radius R12. The virtual circle G1is centered on the axis line E2, and the virtual circle G2is centered on the axis line E3. The radius R12is smaller than the radius R11. The two cam rollers159and203can rotate with respect to the third gear152. A virtual circle G3passing through the cam roller159has a radius R13. A virtual circle G4passing through the cam roller203has a radius R14. The virtual circles G3and G4are both centered on the axis line E4. The radius R14is smaller than the radius R13. The radii R13and R14are smaller than the radius R12. 
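The practical consequence of placing cam rollers on circles of different radii follows from the ordinary lever relations. The expressions below are an idealization that neglects friction and gear losses and is given only for illustration.

v \approx \omega\,R, \qquad \tau \approx F\,R, \qquad P = \tau\,\omega \approx F\,v

Here \omega is the angular speed of the gear carrying the engaged roller, R the radius of the circle on which that roller sits (R11, R12, R13 or R14), F the spring force opposing the motion, v the resulting speed of the plunger 126 or the weight 118, and P the mechanical power. A roller on a smaller circle therefore demands less torque for the same spring force, at the price of a slower lift; at a given motor power the mechanism trades speed for torque exactly where the spring force is largest.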
As described above, the radius R11and the radius R12are different from each other, and the radius R13and the radius R14are different from each other. Examples of the material of the cam rollers157,158,159,202, and203include metal, non-ferrous metal, and steel. It is supposed that the cam rollers157,158,159,202, and203have a cylindrical shape and outer diameters of the cam rollers157,158,159,202, and203are all the same. When the power of the battery pack117is supplied to the electric motor114and the motor shaft146is rotated forward, the rotational force of the motor shaft146is transmitted to the first gear150through the speed reducer147. When the first gear150is rotated in the clockwise direction inFIG.15, the second gear151is rotated in the counterclockwise direction and the third gear152is rotated in the clockwise direction. As shown inFIG.15, engaging portions204,205, and206are provided to the plunger126. When the first gear150is rotated in the clockwise direction inFIG.15, the cam roller157can be engaged with and released from the engaging portion204. When the second gear151is rotated in the counterclockwise direction, the cam roller158can be engaged with and released from the engaging portion205and the cam roller202can be engaged with and released from the engaging portion206. When the third gear152is rotated in the clockwise direction, the cam roller159can be engaged with and released from the engaging portion200and the cam roller203can be engaged with and released from the engaging portion201. Next, an example of using the driver110will be described. When the controller140detects the trigger switch143being turned off, the controller140does not supply the power to the electric motor114and stops the motor shaft146. When the electric motor114is stopped, the plunger126is stopped at the position in contact with the plunger bumper138, that is, the bottom dead center as shown inFIG.14. Also, the weight118is biased by the elastic force of the spring136and is stopped at the position in contact with the weight bumper137, that is, the top dead center. The controller140estimates the positions of the plunger126and the weight118in the direction of the center line E1by processing the signal of the position detection sensor144. When the user presses the tip of the injection portion123to the workpiece W1and the controller140detects the trigger switch143being turned on, the controller140supplies the power to the electric motor114to rotate the motor shaft146forward. The rotational force of the motor shaft146is amplified by the speed reducer147and transmitted to the first gear150, and the first gear150is rotated in the clockwise direction as shown on the left side ofFIG.15. When the first gear150is rotated in the clockwise direction, the second gear151is rotated in the counterclockwise direction and the third gear152is rotated in the clockwise direction. When the first gear150is rotated in the clockwise direction and the cam roller157is engaged with the engaging portion204, the plunger126is operated in the second direction D2against the biasing force of the spring136as shown on the right side ofFIG.15. Namely, the striking mechanism112rises. Also, when the third gear152is rotated in the clockwise direction and the cam roller159is engaged with the engaging portion200, the weight118is operated in the first direction D1. Namely, the weight118falls as shown on the right side ofFIG.15.
Further, in the state where the cam roller157is engaged with the engaging portion204, the cam roller158is engaged with the engaging portion205. Thereafter, the cam roller157is released from the engaging portion204. Also, as shown on the left side ofFIG.16, in the state where the cam roller158is engaged with the engaging portion205, the cam roller202is engaged with the engaging portion206. Therefore, the striking mechanism112further rises. Also, as shown on the right side ofFIG.15, in the state where the cam roller159is engaged with the engaging portion200, the cam roller203is engaged with the engaging portion201. Next, as shown on the left side ofFIG.16, the cam roller159is released from the engaging portion200. Therefore, the weight118further falls. Then, when the plunger126reaches the top dead center and the cam roller202is released from the engaging portion206as shown on the right side ofFIG.16, the plunger126falls by the biasing force of the spring136as shown inFIG.17. Also, when the weight118reaches the bottom dead center and the cam roller203is released from the engaging portion201as shown on the right side ofFIG.16, the weight118rises by the biasing force of the spring136as shown inFIG.17. When the plunger126falls, that is, the striking mechanism112falls, the driver blade127strikes the fastener125located in the injection path124. The fastener125is driven into the workpiece W1. After the driver blade127strikes the fastener125, the plunger126collides with the plunger bumper138. The plunger bumper138absorbs a part of the kinetic energy of the striking mechanism112. Also, the weight118collides with the weight bumper137. The weight bumper137absorbs a part of the kinetic energy of the reaction absorption mechanism208. As described above, when the striking mechanism112is operated in the first direction D1to strike the fastener125, the weight118is operated in the second direction D2opposite to the first direction D1. Therefore, it is possible to reduce the reaction at the time when the striking mechanism112strikes the fastener125. The controller140estimates the position of the plunger126in the direction of the center line E1and stops the electric motor114from when the plunger126starts to fall to when the plunger126collides with the plunger bumper138. Therefore, the plunger126is stopped at the bottom dead center in contact with the plunger bumper138, and the weight118is stopped at the top dead center in contact with the weight bumper137. Then, when the user releases the operation force to the trigger142and applies the operation force to the trigger142again, the controller140rotates the electric motor114, and the striking mechanism112and the weight118are operated in the same manner as described above. When the plunger126rises against the biasing force of the spring136, the element to transmit the torque of the electric motor114to the plunger126is switched from the cam roller157to the cam rollers158and202. Here, the radius R12is smaller than the radius R11. Therefore, when the striking mechanism112rises by the torque of the electric motor114, the arm of the moment becomes shorter as the striking mechanism112approaches to the top dead center. Accordingly, it is possible to suppress the increase in the load torque of the electric motor114when the striking mechanism112approaches to the top dead center. Note that the torque applied from the striking mechanism112to the first gear150is counterclockwise inFIG.15andFIG.16.
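The reaction-cancelling role of the weight 118 can be expressed as an approximate momentum balance. This is an idealization in which friction and the guiding forces on the guide shaft 128 are neglected; it is not a statement taken from the specification.

m_{112}\,v_{112} \;\approx\; m_{118}\,v_{118}

with the two velocities directed oppositely along the center line E1, where m_{112} and v_{112} are the mass and speed of the striking mechanism 112 and m_{118} and v_{118} those of the weight 118. Because the spring 136 acts between the two bodies, the momentum it gives them is equal and opposite, so the net impulse passed to the housing 111 during the strike is small; a heavier weight simply moves more slowly for the same momentum.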
Also, when the weight118falls against the biasing force of the spring136, the element to transmit the torque of the electric motor114to the weight118is switched from the cam roller159to the cam roller203. Here, the radius R14is smaller than the radius R13. Therefore, when the weight118falls by the torque of the electric motor114, the arm of the moment becomes shorter as the weight118approaches to the bottom dead center. Accordingly, it is possible to suppress the increase in the load torque of the electric motor114when the weight118approaches to the bottom dead center. Note that the torque applied from the reaction absorption mechanism208to the first gear150through the third gear152and the second gear151is counterclockwise inFIG.15andFIG.16. The driver110shown inFIG.18shows the example in which the reaction absorption mechanism208shown inFIG.13andFIG.14is not provided. The driver110shown inFIG.18can obtain the same function and effect as those of the driver110shown inFIG.13andFIG.14except the operation of the reaction absorption mechanism208. Third Embodiment FIG.19is a schematic diagram showing a driver according to the third embodiment. A driver70includes a housing71, an electric motor72, a cylinder73, a striking mechanism74, a cam75, a spring76, and a bumper77. The electric motor72, the cylinder73, the cam75, the spring76, and the bumper77are provided in the housing71. The cylinder73is fixed and provided in the housing71, and the striking mechanism74is movable in a direction of a center line A3. The striking mechanism74includes a piston80and a driver blade81. The spring76is a compression spring made of metal, and the spring76is disposed in the cylinder73in the compressed state. The spring76biases the striking mechanism74by the elastic restoring force in a first direction B3, that is, in the direction approaching to the bumper77.FIG.19shows the state where the piston80is pressed to the bumper77and the striking mechanism74is located at the bottom dead center. The cam75is attached to a rotary shaft78, and a clutch configured to connect and disconnect the power transmission path between the rotary shaft78and the electric motor72is provided. When the clutch is connected, the cam75is rotated in the counterclockwise direction by the torque of the electric motor72. A winding portion75A is formed on an outer circumferential surface of the cam75. A radius from an axis line A4to the winding portion75A, that is, a radius R4differs in the rotation direction of the cam75. A pair of guide rollers82is provided in the housing71. A first end portion of a wire79is connected to the cam75, and a second end portion of the wire79is connected to the piston80. The wire79passes between the pair of guide rollers82. A phase detection sensor configured to detect a phase of the cam75in the rotation direction is provided in the housing71. A controller configured to control the rotation and the stop of the electric motor72is provided in the housing71. The signal of the phase detection sensor is input to the controller. The controller controls the connection and the disconnection of the clutch. In the driver70inFIG.19, when the electric motor72is stopped, the striking mechanism74is pressed to the bumper77by the biasing force of the spring76and is stopped at the bottom dead center. When the electric motor72is rotated, the cam75is rotated in the counterclockwise direction inFIG.19and the wire79is wound around the winding portion75A and pulled.
When the wire79is pulled, the striking mechanism74is moved in a second direction B4, that is, the striking mechanism74rises. The controller disconnects the clutch when the striking mechanism74reaches the top dead center. Then, the striking mechanism74falls by the force of the spring76and strikes the fastener. When the striking mechanism74falls, the wire79is drawn out from the winding portion75A. Thereafter, when the piston80collides with the bumper77, the controller stops the electric motor72and the striking mechanism74is stopped at the bottom dead center. When the cam75is rotated by the torque of the electric motor72to raise the striking mechanism74, the radius R4at a position P1where the wire79is wound around the winding portion75A becomes smaller as the striking mechanism74rises. Thus, the radius R4from the axis line A4to the position P1, that is, the arm of the moment becomes shorter as the striking mechanism74rises, and the pulling force transmitted from the cam75to the wire79is increased. Therefore, it is possible to suppress the increase in the load torque of the electric motor72when the striking mechanism74rises. The meanings of the matters described in the drivers according to the first to third embodiments will be described. The pin wheel45and the cam75are examples of a first rotational element. The first gear150and the second gear151are examples of a second rotational element, and the third gear152is an example of a third rotational element. The pressure chamber13and the springs76and136are examples of a first moving mechanism, and the electric motors15,72, and114are examples of a motor. The main body portion25K is an example of a first main body portion. The plunger126is an example of a second main body portion. The pin wheel45, the cam75, the first gear150, and the second gear151are examples of a second moving mechanism. The spring136is an example of a third moving mechanism. The third gear152and the cam rollers159and203are examples of a fourth moving mechanism. The pins45A to45H, the winding portion75A, and the cam rollers157,158,159,202, and203are examples of a torque suppression mechanism. The convex portions25A to25H and the convex portions62A to62H are examples of a plurality of first engaging portions. The pins45A to45H are examples of a plurality of second engaging portions. The engaging portions204,205, and206are examples of a third engaging portion. The cam rollers157,158, and202are examples of a fourth engaging portion. The engaging portions200and201are examples of a fifth engaging portion. The cam rollers159and203are examples of a sixth engaging portion. The pins45F,45G, and45H are examples of a high load engaging portion, and the pins45A to45E are examples of a low load engaging portion. The top dead center is an example of a first position, and the bottom dead center is an example of a second position. The wire79is an example of a wire material, and the pins45A to45H and the winding portion75A are examples of a transmitter. The axis line A2is an example of a first axis line, and the axis lines E2and E3are examples of a second axis line. The axis line E4is an example of a third axis line. The radii R1, R2, R3, R4, R5, R6, R11, R12, R13, and R14are examples of a distance. The reaction absorption mechanism208is an example of a reaction absorption mechanism, and the weight118is an example of a weight. The driver is not limited to those described in the first to third embodiments, and can be modified in various ways within the scope of the embodiments.
For example, in the first to third embodiments, examples of the motor to move the striking mechanism in the second direction include a hydraulic motor and a pneumatic motor in addition to the electric motor. The electric motor may be either a brush motor or a brushless motor. The power source of the electric motor may be either a DC power source or an AC power source. Examples of the rotational element include a gear, a pulley, and a rotary shaft in addition to the pin wheel and the cam. In the first embodiment, the protrusion amount of the first engaging portion with respect to the main body portion may be either the distance from the edge of the main body portion or the distance from the center line of the main body portion. The plurality of second engaging portions may be a plurality of teeth provided on an outer circumferential surface of the gear in addition to the plurality of pins provided to the rotational element. The distance from the axis line to the second engaging portion corresponds to the distance from the axis line to the tip of the tooth. In the description of the first embodiment with reference toFIG.3,FIG.4,FIG.5,FIG.8,FIG.9,FIG.10, andFIG.11, the pin wheel45is described as being rotated in the counterclockwise direction by the torque of the electric motor15. On the other hand, the torque applied from the striking mechanism12to the pin wheel45is described as being clockwise. In the second embodiment, the first moving mechanism and the third moving mechanism of the driver110may be separately provided or may be shared. In the driver110shown inFIG.14, the spring136has a role as the first moving mechanism that biases the striking mechanism112in the first direction D1and a role as the third moving mechanism that biases the reaction absorption mechanism208in the second direction D2. On the other hand, it is also possible to separately provide a metal spring as the first moving mechanism that biases the striking mechanism in the first direction and a metal spring as the third moving mechanism that biases the reaction absorption mechanism in the second direction. In the second embodiment, there may be one second rotational element rotated about the second axis line or there may be a plurality of second rotational elements. When there is one rotational element, a plurality of fourth engaging portions are all provided to the one second rotational element, and the second rotational element can be rotated about one second axis line. When there are a plurality of second rotational elements, the fourth engaging portions are respectively provided to the plurality of second rotational elements. The plurality of second rotational elements can be rotated about respectively different second axis lines. One or more fourth engaging portions are respectively provided to the plurality of second rotational elements. The fourth engaging portions respectively provided to the plurality of second rotational elements have the different distances from the corresponding second axis lines which are the centers of the respective second rotational elements. Note that, when the plurality of fourth engaging portions are provided to one second rotational element, the distances from the second axis line which is the center of the second rotational element to the fourth engaging elements may be the same or different. 
Further, it is also possible to adopt the configuration in which the rotation directions of the plurality of second rotational elements are the same in the driver according to the second embodiment. This can be implemented by, for example, winding a timing belt to the plurality of second rotational elements. In this case, the positions of the engaging portions provided to the second rotational elements, the radii of the engaging portions disposed in the second rotational elements, and the positions of the engaging portions provided to the striking mechanism are arbitrarily designed. In the description of the second embodiment with reference toFIG.15,FIG.16, andFIG.17, the example in which the first gear150is rotated in the clockwise direction by the torque of the electric motor114is described. On the other hand, the example in which the torque applied from the striking mechanism112to the first gear150is counterclockwise is described. In the third embodiment, examples of the wire material include a wire, a cable, and a rope. In the third embodiment, the wire material may be wound around a pulley between the cam and the striking mechanism. In the description of the third embodiment with reference toFIG.19, the example in which the cam75is rotated in the counterclockwise direction by the torque of the electric motor72is described. On the other hand, the example in which the torque applied from the striking mechanism74to the cam75is clockwise is described. In the drawings for describing the first, second, and third embodiments, the clockwise direction and the counterclockwise direction are definitions used for convenience and other directions may be used as long as the directions are opposite directions. Examples of the first moving mechanism configured to move the striking mechanism in the first direction include a gas spring, a metal spring, a non-ferrous metal spring, a magnetic spring, and a synthetic rubber. The pressure chamber13described in the first embodiment is an example of the gas spring. The metal spring and the non-ferrous metal spring may be either a compression spring or a tension spring. Examples of the metal described in the first, second, and third embodiments include iron and steel. Examples of the non-ferrous metal described in the first, second, and third embodiments include aluminum. The magnetic spring moves the striking mechanism in the first direction by the repulsive force between the same poles of the magnets. The synthetic rubber moves the striking mechanism in the first direction by the repulsive force of the synthetic rubber. The magnetic spring or the synthetic rubber is provided in the housing. Further, the second moving mechanism may be configured by combining power transmission elements such as a pulley, a sprocket, a chain, a wire, a cable and others. The fourth moving mechanism may be configured by combining power transmission elements such as a pulley, a sprocket, a chain, a wire, a cable and others. Further, the first moving mechanism may be defined as a first biasing mechanism and the second moving mechanism may be defined as a second biasing mechanism. Moreover, the third moving mechanism may be defined as a third biasing mechanism and the fourth moving mechanism may be defined as a fourth biasing mechanism. The striking mechanism can be stopped at the standby position, and it is also possible to set the bottom dead center as the standby position of the striking mechanism. 
Further, examples of the workpiece include a floor, a wall, a ceiling, a post, and a roof. Examples of a material of the workpiece include wood, concrete, and plaster. REFERENCE SIGNS LIST
10, 70, 110 ... driver
12, 74, 112 ... striking mechanism
13 ... pressure chamber
15, 72, 114 ... electric motor
25K ... main body portion
25A to 25H, 62A to 62H ... convex portion
45 ... pin wheel
45A to 45H ... pin
75 ... cam
75A ... winding portion
76, 136 ... spring
79 ... wire
85 ... first region
86 ... second region
118 ... weight
126 ... plunger
150 ... first gear
151 ... second gear
152 ... third gear
157, 158, 159, 202, 203 ... cam roller
200, 201, 204, 205, 206 ... engaging portion
208 ... reaction absorption mechanism
A2, A4, E2, E3, E4 ... axis line
B1, B3, D1 ... first direction
B2, B4, D2 ... second direction
H1, H2 ... protrusion amount
R1, R2, R3, R4, R5, R6, R11, R12, R13, R14 ... radius
11858102 | DETAILED DESCRIPTION Identical or functionally identical elements are indicated by the same reference signs in the figures, unless stated otherwise. FIG.1schematically shows a hammer drill1as an example of a portable hand-held power tool. The illustrative hammer drill1has a tool holder2, into which a tool3can be inserted and locked. The tool3is for example a drill bit, a chisel etc. The embodiment illustrated by way of example turns the tool holder2about a working axis4and at the same time exerts periodically impacts on the tool along the working axis4. The hand-held power tool1can have a mode selector switch5, which allows the user to selectively activate and deactivate the rotational movement and selectively activate and deactivate the percussive operation. The user can put the hand-held power tool1into operation by means of a monostable operating button6. The hand-held power tool1has a handle7. The user can hold and guide the hand-held power tool1during operation by way of the handle7. The operating button6is preferably attached to the handle7in such a way that the user can operate the operating button6using the hand holding the handle7. The handle7can be decoupled from a machine housing8by way of damping elements. The hand-held power tool1has a rotary drive9, which is coupled to the tool holder2. Among other things, the rotary drive9can have a step-down gear mechanism10and a slip clutch11. An output shaft12of the rotary drive9is connected to the tool holder2. The rotary drive9is coupled to an electric motor13. The user can switch the electric motor13on and off by actuating the operating button6, wherein the operating button6accordingly controls a power supply to the electric motor13. In one embodiment, a rotational speed of the electric motor13can be set by way of the operating button6. The hand-held power tool1has a pneumatic impact mechanism14. The pneumatic impact mechanism14has an exciter piston15and an impact piston16. The exciter piston15is rigidly coupled to the electric motor13. An eccentric wheel17and a connecting rod18convert the rotational movement of the electric motor13into a movement in translation on the working axis4. The exciter piston15and the impact piston16close off a pneumatic chamber19between one another. In the illustrated embodiment, radial closure of the pneumatic chamber19is provided by a guide tube20, which at the same time guides the exciter piston15and the impact piston. In other embodiments, the impact piston can be of hollow design and the exciter piston15is guided in the impact piston, or vice versa. The air enclosed in the pneumatic chamber19is compressed and decompressed by the exciter piston15. The changes in pressure couple the impact piston to the movement of the exciter piston15, and the pneumatic chamber19behaves in a similar manner to a spring, and is therefore also referred to as a pneumatic spring. The impact piston16can strike the tool3directly or strike the tool indirectly by way of an anvil21. The hand-held power tool1is switched on and off by the operating button6. The operating button6is arranged in the handle7. The operating button6has a switching cap22, which the user can grip. In a rest position of the operating button6, the switching cap22protrudes from the handle7counter to a switching direction23(FIG.2). The switching cap22bears preferably against a stop24of the machine housing8. The user can press the switching cap22in the switching direction23into a pressed switching position (FIG.3). 
In the process, the switching cap22can slide or pivot into the handle7. The switching cap22can, as illustrated in the illustrated example, be pivotable about a bearing point or be guided in a linear manner. The switching cap22is at a distance from the stop24. A restoring element25, for example a helical spring, applies a force to the switching cap22, said force acting counter to the switching direction23. The restoring element25is tensioned to a greater extent in the pressed switching position than in the rest position, with the result that the switching cap22is stable only in the rest position. The switching cap22returns to the rest position when the user releases the switching cap22. The switching direction23is preferably antiparallel to the working direction26in which the tool3is directed. The switching cap22is coupled to a switching mechanism27of the operating button6. The switching mechanism27deactivates the electric motor13when the switching cap22is in the rest position. The switching mechanism27activates the electric motor13when the switching cap22is in the pressed position. The switching mechanism27may contain an electromechanical, optical, magnetic or other sensor for determining the position of the switching cap22. In one embodiment, the switching mechanism27can set a rotational speed or power consumption of the electric motor13depending on positions that are pressed to different extents. The hand-held power tool1has a locking switch28. The locking switch has a releasing position (FIG.2;FIG.3) and a locking position (FIG.4). The locking switch28has a pivotable catch29, which engages in the switching cap22in the locking position. The catch29arrests the movement of the switching cap22counter to the switching direction23and therefore prevents the switching cap22from returning into the rest position. The operating button6remains in the pressed switching position. The electric motor13remains activated, even if the user releases the switching cap22. The catch29interacts with the switching cap22. The switching cap22has a blocking face30, on which the catch29can bear in the locking position. The blocking face30can be realized by the outer contour of the switching cap22or by an externally accessible rib or the like. The blocking face30is preferably largely perpendicular to the switching direction23. The blocking face30is directed counter to the switching direction23and toward the catch29. The catch29is pivotable in a pivoting direction31that is perpendicular to the switching direction23. The catch29can be pivoted in the switching direction23between a first position, which is associated with the releasing position, and a second position, which is associated with the locking position. The catch29does not overlap the blocking face30in the releasing position. The overlap relates to the switching direction23, i.e. the overlap can be determined perpendicular to the switching direction23in projection onto a plane. The catch29overlaps the blocking face30in the locking position. A tip32of the catch29bears on the blocking face30in the switching direction23. In a similar manner to the gripping hand, the tip32exerts an opposing force to the restoring element, with the result that the operating button6remains pressed. The position of the tip32along the switching direction23corresponds to the position of the blocking face30along the switching direction23with the operating button6pressed. 
The tip32can project beyond the blocking face30in the switching direction23when the operating button6is in the rest position. The locking switch28is inoperable when the hand-held power tool1is switched off. The catch29is suspended for example on a resilient spring33. The spring33can be realized for example by a leaf spring, which is connected at one end34to the machine housing8. The spring33exerts a force acting counter to the pivoting direction31on the catch29. The spring33can be relaxed, i.e. force-free, in the releasing position. The spring33is tensioned to a greater extent in the locking position than in the releasing position. The catch29has a tendency to move automatically from the locking position into the releasing position. The locking switch28has an actuating knob35, which is able to be gripped by the user. The user can move the actuating knob35in a sliding direction36between a first position and a second position. The first position is associated with the releasing position of the locking switch28and the second position is associated with the locking position of the locking switch28. The sliding direction36is preferably parallel to the switching direction23of the operating button6. A slotted guide37couples the actuating knob35to the catch29. The slotted guide37has a top side with an inclined slotted-guide face38. The top side points counter to the pivoting direction31, for example faces the actuating knob35. The inclined slotted-guide face38is inclined with respect to the switching direction23. The inclined slotted-guide face38is preferably inclined with respect to the switching direction23in the releasing position and the locking position. The inclined slotted-guide face38descends in the pivoting direction31along the switching direction23. The actuating knob35is rigidly connected to a finger39, which presses against the inclined slotted-guide face38. The finger39moves the slotted guide37in the pivoting direction31when the actuating knob35is moved into the locking position. The catch29can press against the finger39in a manner preloaded by the spring33, both in the releasing position and in the locking position. The catch29returns from the locking position into the releasing position by itself when the contact pressure exerted by the finger39decreases as it moves along the inclined slotted-guide face38. The slotted guide37has a bottom side with an inclined slotted-guide face40. The bottom side points in the pivoting direction31, for example faces away from the actuating knob35. The inclined slotted-guide face40is inclined with respect to the switching direction23. The inclined slotted-guide face40ascends in the pivoting direction31along the switching direction23. The inclined slotted-guide face40is preferably inclined with respect to the switching direction23in the releasing position and the locking position. The actuating knob35is rigidly connected to a sliding block41, which, engaging behind the slotted guide37, can bear on the inclined slotted-guide face40. The sliding block41can lift the catch29out of the locking position in order to support the spring33. The catch29, the slotted guide37and the spring33are preferably in the form of a leaf spring. FIG.7shows the slotted guide37and the catch29in a view onto the bottom side. The position of the locking switch28corresponds toFIG.4. In the locking position, the sliding block41is preferably disengaged from the slotted guide37. 
The engagement of the sliding block41is realized for example by a narrower portion42of the slotted guide37. The narrower portion42, for example a cutout in the leaf spring, adjoins the inclined slotted-guide face40. The slotted-guide face40laterally overlaps the sliding block41. In a corresponding manner, the sliding block41can bear on the slotted-guide face40in the releasing position, as illustrated inFIG.3andFIG.5. The narrower portion42forms an opening, and when the sliding block41is located therein, it is not in contact with the slotted-guide face40. The arrangement and dimensions of the narrower portion42are chosen such that the sliding block41does not overlap the portion42when the locking switch28is in the locking position (cf.FIG.6). | 11,159 |
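The wedge action of the inclined slotted-guide faces can be summarized by a simple geometric relation. The inclination angle α is not specified in the description and is introduced here only for illustration; friction is neglected.

\Delta y \;\approx\; \Delta x \cdot \tan\alpha

where Δx is the travel of the actuating knob 35 along the sliding direction 36, Δy the resulting deflection of the catch 29 in the pivoting direction 31, and α the angle between the slotted-guide face 38 (or 40) and the sliding direction. A shallow angle therefore gives a small, well-controlled pivoting stroke for a comparatively long knob travel, and, conversely, the force available to press the catch in the pivoting direction exceeds the force applied to the knob by roughly the factor 1/\tan\alpha.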
11858103 | DESCRIPTIVE KEY
10 auxiliary handle
15 upper handle
20 ergonomic hand grip
25 ball and socket joint
30 multi-axis travel path "t"
35 main shaft
40 shaft abutment
42 angled section
45 front clamp
50 fastener
55 angular displacement "a"
57 angular displacement "b"
60 rear clamp
65 captive fastener
70 hydraulic tamp
75 tamping foot
80 reciprocating mechanism
85 shaft handle
90 hydraulic connection hose
95 user
100 earth
DESCRIPTION OF THE PREFERRED EMBODIMENTS The best mode for carrying out the invention is presented in terms of its preferred embodiment, herein depicted withinFIGS.1through5. However, the invention is not limited to the described embodiment, and a person skilled in the art will appreciate that many other embodiments of the invention are possible without deviating from the basic concept of the invention and that any such work around will also fall under the scope of this invention. It is envisioned that other styles and configurations of the present invention can be easily incorporated into the teachings of the present invention, and only one (1) particular configuration shall be shown and described for purposes of clarity and disclosure and not by way of limitation of scope. All of the implementations described below are exemplary implementations provided to enable persons skilled in the art to make or use the embodiments of the disclosure and are not intended to limit the scope of the disclosure, which is defined by the claims. The terms "a" and "an" herein do not denote a limitation of quantity, but rather denote the presence of at least one (1) of the referenced items. 1. DETAILED DESCRIPTION OF THE FIGURES Referring now toFIG.1, a front view of the auxiliary handle10, according to the preferred embodiment of the present invention is disclosed. The auxiliary handle10(herein also described as the "device") provides an additional means of holding and utilizing a hydraulic tamp70to reduce contact with the flexible hydraulic connection hoses90by the user. The device10includes a generally "T"-shaped upper handle15equipped with two (2) ergonomic hand grips20to provide for increased user comfort. The center portion of the upper handle15terminates in a ball and socket joint25that allows for angular displacement of upper handle15along a multi-axis travel path "t"30. The opposite side of the ball and socket joint25comprises a main shaft35with the approximate dimensions of eight to ten inches (8-10 in.). The lower end of the main shaft35terminates in an angled section42. A shaft abutment40is the portion of the angled section42that is placed against the shaft handle85of the hydraulic tamp70and clamped thereto with the aid of fasteners50such as bolts, screws, rivets or the like. The device10would be made of metal, such as steel and/or aluminum for strength. Referring next toFIG.2, a side view of the device10, according to the preferred embodiment of the present invention is depicted. The side view provides further clarification of the upper handle15, the ergonomic hand grips20, the ball and socket joint25, the main shaft35and its angular displacement "b"57with relation to the angled section42, and the angular displacement "a"55between the angled section42and the shaft abutment40. The angular displacement "a"55between the angled section42and the shaft abutment40is envisioned to vary between thirty to sixty degrees (30-60°) dependent on user and manufacturing preferences.
Similarly, the angular displacement "b"57between the main shaft35and the angled section42is also envisioned to vary between thirty to sixty degrees (30-60°), essentially to be identical to angular displacement "a"55. This is to ensure that the main shaft35is essentially oriented parallel with the shaft abutment40and the shaft handle85of the hydraulic tamp70. The exact amount of angular displacement "b"57and angular displacement "a"55is not intended to be a limiting factor of the present invention. The front clamp45is used in conjunction with a rear clamp60to attach the device10to the hydraulic tamp70. Further details on said attachment will be provided herein below. The two (2) fasteners50(of which only one (1) is shown due to illustrative limitations) are provided with a captive fastener65such as a nut (as shown), threads, backing plate, or the like. Thusly, the front clamp45and rear clamp60, working in conjunction with the fasteners50and the captive fastener65, provide a mechanical connection to the hydraulic tamp70that is structurally significant without requiring modification to the hydraulic tamp70. Referring now toFIG.3, a pictorial view of the device10shown in an installed state on a hydraulic tamp70, according to the preferred embodiment of the present invention is shown. The hydraulic tamp70is of a conventional design including a tamping foot75, a reciprocating mechanism80, a shaft handle85, and hydraulic connection hoses90. The fasteners50attach to the shaft handle85with the aid of the front clamp45and the rear clamp60. This places the upper handle15at the upper end of the main shaft35at an elevated position allowing for easy handling and use of the hydraulic tamp70should it be used in an excavated hole or trench. It also provides for an additional contact point for the hydraulic tamp70allowing for increased leverage and ease of handling. It is also viewed as ergonomically friendly in a manner that may help reduce repetitive stress injuries associated with long term usage of a hydraulic tamp70. Referring next toFIG.4, a sectional view of the device10, as seen along a line I-I, as shown inFIG.3, according to the preferred embodiment of the present invention is disclosed. The front clamp45is affixed to a bottom portion of the angled section42, immediately subjacent to the shaft abutment40. The front clamp45and the rear clamp60are placed along the exterior surface of the shaft handle85at one-hundred-eighty-degrees (180°) opposite to each other. The fasteners50and the captive fastener65are then used to tighten the front clamp45and the rear clamp60together forming a friction fit about the shaft handle85. As aforementioned, this arrangement does not require any modification to the shaft handle85allowing it to be removed at a time in the future without leaving any tell-tale marks behind. Additionally, the arrangement of the front clamp45and the rear clamp60can be rotated three hundred-sixty degrees (360°) about the shaft handle85allowing for positioning both at angular position and height to suit the user's preferences. The shape of the shaft abutment40may be planar or curvilinear (i.e., concave with respect to the shaft handle85). The universal sizing arrangement provided by the front clamp45and the rear clamp60allows for use on almost all makes and models of hydraulic tamp70(as shown inFIG.3).
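The parallelism condition stated above, that angular displacement "a" 55 and angular displacement "b" 57 be essentially identical, follows from simple angle bookkeeping. The sign convention and the assumption of a planar, Z-shaped offset are introduced here only for illustration.

\theta_{35} - \theta_{40} = a - b

Following the part from the shaft abutment 40 over the angled section 42 to the main shaft 35, the direction turns by +a at the first bend and by -b at the second, so the main shaft 35 is rotated by (a - b) relative to the abutment. With a = b the net rotation is zero and the main shaft 35 runs parallel to the shaft abutment 40 and therefore to the clamped shaft handle 85, whatever common value between thirty and sixty degrees is chosen.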
A user95is using the hydraulic tamp70equipped with the auxiliary handle10. This arrangement allows the user95to grip the upper portion of the shaft handle85in one (1) hand, while the other hand is used to grip the upper handle15. This arrangement places both hands at or near the same elevation resulting in a more relaxed stance during use versus the conventional stance of one (1) hand in a high position and one hand in a low position (both on the shaft handle85). Note that the device10can be used in either the right (as shown) or the left hand of the user95to suit individual preferences and work conditions. 2. OPERATION OF THE PREFERRED EMBODIMENT The preferred embodiment of the present invention can be utilized by the common user in a simple and effortless manner with little or no training. It is envisioned that the device10would be constructed in general accordance withFIG.1throughFIG.5. The user95would procure the device10from conventional procurement channels such as industrial supply houses, construction equipment suppliers, mechanical supply houses, mail order and internet supply houses and the like. After procurement and prior to utilization, the device10would be prepared in the following manner: the device10would be attached to the hydraulic tamp70by placing the front clamp45and the rear clamp60around the shaft handle85at the desired location and securing with the fasteners50and the captive fastener65. At this point in time, the device10is ready for usage. During utilization of the device10, the following procedure would be initiated: the user95would hold the hydraulic tamp70by both the shaft handle85and the upper handle15as shown inFIG.5; it may be held solely by the upper handle15as well, especially when the hydraulic tamp70is being used at a low elevation in comparison to the user95such as in a trench, or at the low point of an excavation. The foregoing descriptions of specific embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. | 9,478
11858104 | Identical or functionally identical elements are indicated by the same reference numerals in the figures, unless stated otherwise. DETAILED DESCRIPTION FIG.1schematically shows a hammer drill as an example of a portable power chiseling tool1. The hammer drill has a tool holder2into which a tool3can be inserted and locked. The tools3can be, for example, drill bits for chiseling mineral construction materials, such as concrete or rock, by turning, or chisels for purely chiseling the same construction materials. The hammer drill1contains a pneumatic striking mechanism4, which, during operation, periodically exerts blows in the striking direction5on the tool3. In addition, the hammer drill1contains an output shaft6, which, during operation, rotates the tool holder2and therefore the tool3about a working axis7. The striking mechanism4and the output shaft6are driven by a motor8, for example an electric motor. The output shaft6can be switched off in portable power chiseling tools1, or purely chiseling portable power tools1are without an output shaft. The portable power tool1has a handle9with which the user can hold and guide the portable power tool1during operation. The handle9is fastened to a machine housing10. The handle9is preferably arranged at an end of the portable power tool1or of the machine housing10that is remote from the tool holder2. A working axis7running parallel to the striking direction5and centrally through the tool holder2preferably runs through the handle9when the latter is to be grasped by one hand. The handle9can be partially decoupled from the machine housing10by damping elements in order to damp vibrations of the striking mechanism4. The user can put the portable power tool1into operation by means of a switch12. Actuation of the switch12activates the motor8. The switch12is preferably arranged on the handle9, as a result of which it can be actuated by the hand grasping the handle9. The striking mechanism4has an exciter piston13, a striker14and an anvil15. The exciter piston13, the striker14and the anvil15are arranged lying on the working axis7following one another in the striking direction5. The exciter piston13is coupled to the motor8via a gear train. The gear train converts the rotational movement of the motor8into a periodic forward and backward movement of the exciter piston13on the working axis7. An exemplary gear train is based on an eccentric gear16and a connecting rod17. Another design is based on a wobble drive. The striker14is coupled to the movement of the exciter piston13by a pneumatic chamber18, also referred to as an air spring. The pneumatic chamber18is closed along the working axis7by the exciter piston13on the drive side and by the striker14on the tool side. For this purpose, the striker14is in the form of a piston. In the variant illustrated, the pneumatic chamber18is closed in the radial direction by a guide tube19. The exciter piston13and the striker14slide in an air-tight manner lying against the inner surface of the guide tube19. In another refinement, the exciter piston can be designed in the form of a cup. The striker slides within the exciter piston. The striker can analogously be designed in the form of a cup, with the exciter piston sliding within the striker. The striker14, coupled via the pneumatic chamber18, periodically moves parallel to the striking direction5between a drive-side reversing point and a tool-side reversing point.
The tool-side reversing point is predetermined by the anvil15against which the striker14strikes at the tool-side reversing point. The anvil15is guided movably parallel to the striking direction5between a stop20and the tool3. During operation, when the tool3is pressed against an underlying surface, the user pushes the tool3against the anvil15and indirectly pushes the anvil15against the stop20. The position of the anvil15lying against the stop20is referred to as the working position. The striker14strikes against the anvil15preferably when the anvil15is in the working position. The anvil15serves to pass the blow of the striker14onto the tool3. Damping of the impact by the anvil15is not desirable. FIG.2shows an exemplary embodiment of the anvil15. The anvil15slides in a tubular guide21on the working axis7. The working axis7is determined by the cylindrical inner surface22of the guide21. The inner surface22is arranged coaxially with the working axis7. The anvil15has a cylindrical lateral surface23, which rests against the inner surface22. The lateral surface23typically defines the largest diameter of the anvil15. Moreover, the lateral surface23defines a longitudinal axis or anvil axis24of the anvil15. The anvil axis24corresponds to the axis of symmetry of the lateral surface23. By virtue of the guide21of the anvil15over the guiding lateral surface23, the anvil axis24lies on the working axis7. The anvil15has a striking surface25, which faces in the direction of the striker14. The striker14strikes the striking surface25. The surface area of the striking surface25is typically less than the surface area of a cross section in the region of the guiding lateral surface23. The striking surface25is preferably rotationally symmetrical with respect to the anvil axis24. Thus, the striker14strikes centrally on the striking surface25, thereby ensuring more efficient energy transfer. The striking surface25can be of a flat design, although a convex configuration is preferred. In the embodiment illustrated, the striking surface25is adjoined by a cylindrical section, the diameter of which corresponds to the diameter of the striking surface25. The anvil15has an impact surface26, which faces in the direction of the tool3, i.e. in the striking direction5, and faces away from the striker14. The anvil15rests by means of the impact surface26against the tool3or strikes by means of the impact surface26on the tool3. The surface area of the impact surface26is typically less than the surface area of a cross section in the region of the guiding lateral surface23. The impact surface26is likewise rotationally symmetrical with respect to the anvil axis24. Impact transfer from the anvil15to the tool3is performed centrally by the impact surface26. The impact surface26can be flat or convex. In the embodiment illustrated, the impact surface26is adjoined by a cylindrical section27, the diameter of which corresponds to the diameter of the impact surface26. In the working position, the anvil15rests against the stop20. The stop20can be designed as a ring, for example. The ring has an inside diameter which is somewhat larger than the diameter of the striking surface25. The anvil15has a (recoil impact) surface28. The recoil impact surface28preferably has a conical shape. In the region of the recoil impact surface28, the diameter of the anvil15increases uniformly along the anvil axis24from the smaller diameter of the striking surface25to the diameter of the guiding lateral surface23.
The recoil impact surface28is rotationally symmetrical with respect to the anvil axis24. A slope of the recoil impact surface28relative to the anvil axis24and hence also relative to the working axis7is preferably constant along the anvil axis24. The stop20can have a likewise conical surface facing the recoil impact surface28. The stop20can be supported in the machine housing10via a damper element29, e.g. a flexible O-ring. In the chiseling mode, the anvil15moves only slightly out of its working position. After a strike by the striker14on the anvil15, the anvil15moves no further than the tool3out of the tool holder2. Owing to the contact pressure of the user, the tool3is pushed back into the tool receptacle until the anvil15is resting against the stop20. If a tool3is missing or if the tool3is not pressed into contact, the anvil15moves significantly out of the working position. An (idle strike) catcher30stops the anvil15in the striking direction5. The anvil15strikes by means of an end face31on the catcher30. The anvil15is then situated in its forwardmost position in the striking direction5. The anvil15is tilted somewhat relative to the guide21when the anvil15strikes against the idle strike catcher30, i.e. the anvil axis24is tilted relative to the working axis7. The tilting causes jamming of the anvil15in the guide21, thereby dissipating kinetic energy of the anvil15, and the anvil15preferably comes to a halt. The tilting is achieved by a special asymmetry of the end face31of the anvil15. The end face31faces in the striking direction5and slopes relative to the anvil axis24. The end face31connects the lateral surface23to the impact surface26. In the region of the end face31, the diameter of the anvil15decreases from the maximum diameter of the guiding lateral surface23to the diameter of the impact surface26. The special feature of the end face31is its subdivision in the circumferential direction32into a first segment33and a second segment34. In the exemplary embodiment, both segments33,34can be conical. The first segment33is offset in the striking direction5relative to the second segment34. The two segments33,34slope relative to the anvil axis24and the working axis7, preferably at a same slope. The offset is evident from the fact that, for a cut-out of the end face31at a constant radial distance from the working axis7, the portion of the cut-out belonging to the first segment33is closer to the impact surface26than the portion of the cut-out belonging to the second segment34. The first segment33thus makes contact first in the striking direction5. In one exemplary embodiment, a portion of the first segment33lies in the region of 200 degrees to 270 degrees. The second segment34is preferably conical. An axis of the complete cone which forms the second segment34preferably coincides with the anvil axis24. The first segment33can likewise be of conical design. A corresponding axis does not coincide with the anvil axis24. The axis can be offset in parallel with or tilted relative to the anvil axis24. In each cross section perpendicular to the working axis7, a radius of curvature r1of the first segment33is greater than the radius of curvature r2of the second segment. The shallower first segment33can take up a larger proportion of the circumference than the steeper second segment34. The idle strike catcher30is formed by a conical narrowing of the guide21, for example. 
The narrowing has an inside diameter which is greater than the diameter of the impact surface26of the anvil15but less than the diameter of the lateral surface23of the anvil15. The narrowing has a conical inner surface37, which faces in the direction of the anvil15. The conical inner surface37is preferably rotationally symmetrical with respect to the working axis7. The steeper, second segment34results in a larger radial force component as compared with the shallow first segment33. The anvil15is tilted or bent as a result. Both effects lead to efficient braking of the anvil15. This also occurs if the guide21of the anvil15already has a relatively large clearance parallel to the working axis7owing to wear. The guide21can be rigidly anchored in the machine housing10. The exemplary guide21is suspended in a damped manner in the striking direction5. The guide21can be located in a sliding bearing38, for example. A damping element39, e.g. an elastomer, is clamped between a stop40fixed in relation to the housing, and a projection41. The stop40is arranged ahead of the projection41in the striking direction5. In one embodiment, the first segment33can be formed by a flat or almost flat bevel. A radius of curvature r1of the first segment33is accordingly very large. In this embodiment, the first segment33makes up a smaller proportion of the circumference, e.g. between 30 degrees and 45 degrees. | 11,758
11858105 | All figures are schematic, not necessarily to scale and generally only show parts which are necessary in order to elucidate the invention, wherein other parts may be omitted or merely suggested. DETAILED DESCRIPTION FIG.1is a cross sectional view of a portion of an exemplary power tool according to one embodiment, in this case a handheld battery powered tool. The tool comprises a housing10, an input shaft20, a motor (not shown) connected to the input shaft, an output shaft15and a two-speed transmission arranged between the input shaft and the output shaft. Further, the tool comprises a torque transducer and a control unit operative to control the rotational speed of the motor which will be described in greater detail below when describing the functionality of the tool. The two-speed power transmission1of the embodiment shown inFIG.1comprises a planetary gear18and a torque responsive gear shift mechanism19for directing torque from the input shaft20(i.e. from the motor) to the output shaft15through the planetary gear18in a high torque/low speed drive mode or past the planetary gear18in a low torque/high speed drive mode. The transmission is shown inFIG.1in the low torque/high speed drive mode. The planetary gear18comprises a sun wheel21connected to the input shaft20, a ring gear (or gear rim)22secured in the housing10and a planet wheel carrier24. The gear shift mechanism19comprises a driving member26connected to the sun wheel21of the planetary gear18and a driven member27connected to the output shaft15. Coupling elements, in the illustrated embodiment three balls30, are arranged to intercouple in a first position the driving member26and the driven member27, i.e. in what is referred to above as the low torque/high speed drive mode, and to intercouple in a second position the planet wheel carrier24and the driven member27, i.e. in what is referred to above as the high torque/low speed drive mode. Further, the driven member27comprises a number of axially extending grooves36arranged to support the balls30for axial displacement between the first and the second position, whereas the driving member26comprises an axially acting first cam means35arranged in equally spaced recesses39for cooperation with the coupling elements30in the first position of the coupling elements30. These recesses39and cam means35are shown inFIG.2and will therefore be described in greater detail below. A first axially acting coil spring31is coaxially arranged with respect to the driven member for biasing the balls30towards the first position, whereby the action of the coil spring31thereby counteracts the axial force developed by the first cam means35on the balls30. Hereby, the balls30are maintained in the first position at torque values below a predetermined level but forced out of the first position by the first cam means35at torque values above the predetermined level. In the illustrated embodiment, the spring31bears directly against the balls30. The driven member27in turn comprises second, axially acting cam means36barranged to exert an axial shifting force upon the balls30toward the second position of the balls30against the biasing action of the coil spring31as the balls30have left the first position at torque values above the predetermined level. Further, in the illustrated embodiment, the planet wheel carrier24is coupled to an axially movable coupling sleeve29which provides a radial support for the balls30in the second position. 
Therefore, a number of axially extending tracks38for cooperation with the balls30are arranged in an inner surface of the coupling sleeve29(discussed in further detail below with reference toFIG.3). The number of tracks38in the coupling sleeve29is, in the illustrated embodiment, twice the number of balls30, i.e. six. Further, the coupling sleeve29is in the illustrated embodiment biased against the balls30by a second axially acting coil spring40, the coil spring40being coaxially arranged with respect to the driven member27as well as to the first coil spring31. In order to intercouple the planet wheel carrier24and the driven member27in the second position, the planet wheel carrier24comprises an outer sleeve32. This sleeve32extends in an axial direction and is rotationally locked to the coupling sleeve29by means of a number of smaller balls32a. Further, in order to handle the forces from the first coil spring31acting on the balls30, a first axial bearing33is provided supporting the driving element, or member,26against the planet wheel carrier24and a second axial bearing34is provided to support this outer sleeve32against the housing10, such that the force may be absorbed by the housing. Turning toFIG.2, the driving member26and three coupling balls30may be viewed in greater detail. In order to receive and radially support the balls30in the first position, the driving member26comprises a number of recesses39, each adapted to receive one ball30. As may also be seen fromFIG.2, the first axially acting cam element(s)35each form part of respectively one of the recesses; for example, the cam elements may comprise sloping side portions of the recesses39. Further, the driving member26comprises an axial flange26barranged to provide further radial support. In order to ensure that the balls do not make radial contact with surrounding components rotating at a different rotational speed, the combined axial extent of this flange26band the depth of the recesses39is larger than the radius of the balls30. Finally, the coupling sleeve29is shown in greater detail inFIG.3, a perspective view of an exemplary embodiment of the coupling sleeve and again three balls30. InFIG.3, the balls are arranged in the second mode and hence arranged in the tracks38, formed between equally spaced ridges38aformed in an inner surface of the sleeve29. In the outer surface of the sleeve29, tracks29aarranged to receive the smaller balls providing the rotational lock to the planet carrier24are shown. The second coil spring40mentioned above is provided for the less likely case of an angular misalignment between the balls30and the tracks38upon transition of the balls, which may cause a less smooth transition; the bias of the spring40gently forces the sleeve into alignment with the balls, thereby avoiding any noticeable jamming. Returning toFIG.1, it may be noted that the ring gear (or gear rim)22of the planetary gear mechanism is secured in the housing10at least partly by means of a torque transducer (not shown) such that measurements of the torque transferred may be provided. This will be described in greater detail in the following as the functionality of the inventive transmission is explained. In operation, the input shaft20is connected to an electrical motor, and the output shaft15is coupled to a screw joint to be tightened via a nut socket.
The functionality of the transmission and hence the power tool is achieved by the transmission selectively providing a connection between the driving member26and the driven member27, either bypassing or via the planetary gear mechanism depending on the torque level. When the tightening operation starts, the motor starts delivering a torque through the transmission. In a first stage, as the gear shift mechanism19occupies a high speed/low torque drive mode, the balls30are seated in the recesses39of the driving member26and the torque delivered to the driving member26via the input shaft20is transferred via the recesses39, the balls30and the grooves36to the driven member27, i.e. directly from the driving member26to the driven member27without any influence by the planetary reduction gear18. The planet wheel carrier24rotates freely in the housing10. As the torque resistance in the screw joint increases, the first axially acting cam elements35apply increasing axial forces upon the balls30, and when a predetermined torque level is reached this force supersedes the biasing force of spring31and the balls30will start moving axially through the grooves36, where eventually the cam means36bwill apply an auxiliary axial force on the balls30as well, again eventually superseding the force of the spring31and thus allowing the balls30to complete their axial movement and occupy their second position. Examples of such cam means36binclude sloping side or diverging portions of the respective grooves36. Now, the gear shift mechanism19has brought the transmission into its high torque/low speed drive mode. This drive mode is maintained as long as the transferred torque is high enough to make the action of the second cam means36bdominate over the biasing force of spring31. When the torque has decreased to that level, i.e. when the predetermined drive mode shifting point is reached, the force exerted by cam means36bwill no longer dominate over the spring force, and the balls30are shifted back to their first position. In order to facilitate this intercoupling, more particularly to facilitate the gear change, the power tool as mentioned above comprises a sensor (not shown), in this case a torque transducer, and further a control unit (not shown) operative to receive the sensed data from the torque transducer and control the rotational speed of the motor accordingly. More particularly, as the measured torque value approaches the predetermined threshold torque value, i.e. the value at which a gear change is to take place, the control unit reduces the rotational speed of the motor. As, in the illustrated embodiment, the torque transducer is arranged between the housing10and the gear rim22, it follows that the transducer is only active (i.e. gives meaningful readings) in the second drive mode, i.e. the high torque/low speed drive mode, when torque is actually directed over the ring gear. The procedure described above using the data from the transducer to control the speed is hence only relevant when determining when to switch from the high torque/low speed drive mode to the low torque/high speed drive mode. As the transmission operates in the low torque/high speed drive mode, the control unit instead monitors the motor current by means of a suitable circuit arrangement (not shown) in order to determine that the torque is approaching the threshold value and that the rotational speed therefore should be decreased in order to facilitate the gear change.
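The speed-control behavior described above can be pictured with a short sketch. This is an illustrative sketch only, not the disclosed implementation: the function name and the threshold, margin and speed values are assumptions chosen for the example.

```python
# Illustrative sketch only: the threshold, margin and speed values below are assumed
# for the example and are not taken from the disclosed embodiment.

SHIFT_TORQUE_NM = 4.0      # assumed torque at which the drive mode change takes place
APPROACH_MARGIN_NM = 0.5   # begin slowing the motor this close to the shifting point
FULL_SPEED_RPM = 20000.0   # assumed free-running motor speed
SHIFT_SPEED_RPM = 5000.0   # assumed reduced speed that eases the gear change


def commanded_speed(torque_estimate_nm):
    """Speed the control unit would request for a given torque estimate.

    In the low torque/high speed drive mode the estimate comes from the motor
    current; in the high torque/low speed drive mode the torque transducer
    reading would be used instead, compared against the same shifting point.
    """
    if abs(SHIFT_TORQUE_NM - torque_estimate_nm) <= APPROACH_MARGIN_NM:
        # The measured torque is close to the shifting point: reduce the speed
        # so the balls can move between their two positions smoothly.
        return SHIFT_SPEED_RPM
    return FULL_SPEED_RPM


# Far from the shifting point the full speed is kept; close to it the speed drops.
assert commanded_speed(2.0) == FULL_SPEED_RPM
assert commanded_speed(3.8) == SHIFT_SPEED_RPM
```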
As an additional functionality, the notion that the torque transducer starts delivering torque data may be used by the control unit to confirm that the transmission has switched to and is operating in the high torque/low speed drive mode. While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiment. The skilled person understands that many modifications, variations and alterations are conceivable within the scope as defined in the appended claims. Additionally, variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps and the indefinite article “a” or “an” does not exclude a plurality. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope of the claims. | 11,648
11858106 | DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS OF THE INVENTION FIGS.1-7illustrate an exemplary embodiment of a power tool system. The power tool system includes a variety of tools that utilize a common battery pack100. The battery pack100is a removable power tool battery pack and may be of the type shown in, for example, U.S. Pat. Nos. 7,598,705; 7,661,486; or U.S. Patent Application Publication No. 2018/0331335. U.S. Pat. Nos. 7,598,705; 7,661,486; and U.S. Patent Application Publication No. 2018/0331335 are hereby incorporated by reference. Each tool in the power tool system of the exemplary embodiment also includes a common base200. Each tool then is built incorporating this common base200. For example, the specific tools may include a rotary tool400, as shown inFIG.2; a hot glue gun500, as shown inFIG.3; a soldering iron600, as shown inFIG.4; a lamp light700, as shown inFIG.5; or a fan800, as shown inFIG.6. Each of these tools is also powered by a battery pack100. FIG.1illustrates components of the base200.FIG.1is a partially exploded view and one half of a housing201of the common base200is removed to illustrate internals. As is shown inFIG.1, the common base200comprises a housing201. The housing201is removably attachable to the power tool battery pack100. There is an actuator205on the front of the housing201. The actuator205may be used to turn on or off one of the variety of tools. For example, the actuator205may be used to turn on or off the rotary tool400, lamp light700, fan800, etc. In the exemplary embodiment, the actuator205is a rotatable knob, but other actuators are possible. The actuator205may provide a variable input. For example, turning the knob more may increase the speed or intensity of the tool. For example, the knob may be used to control the fan800rotation or rotary tool400rotation at different speeds. Similarly, the knob may control the intensity of the light for the lamp light700. In other embodiments, the actuator205could simply turn a tool on and off. Additionally, the actuator205may in some tools serve only as an on and off actuator and in other tools provide different settings, such as different speed or intensity settings. The housing201also houses a circuit board220including a controller221. The controller221can provide a proportional 100 W (watts) PWM (pulse-width-modulation) power delivery system that allows for the setting of speed, temperature control, lighting control or fan speed, depending upon the particular tool. The proportional pulse-width-modulation power delivery may be in the range of 80 W to 120 W. The pulse-width-modulation power delivery may be in the range of 50 W to 150 W. The circuit board220may be a printed circuit board. The controller221may include a microprocessor. There may be a memory222on the circuit board220and the controller221may itself have a memory component. Other components may also be mounted on the circuit board220such as sensors, resistors or charge and discharge controls. The circuit board220, controller221, memory222and the provided pulse-width-modulation power delivery may be the same for each of the tools. That allows the same design to be used for a variety of different tools400,500,600,700,800. Each of the tools400,500,600,700,800has a base that is similar to the common base200. As is further shown inFIG.1, the housing201includes a recess210. When used for some tools, the recess can hold a stamped aluminum tray250for holding a solder sponge or other component.
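The proportional pulse-width-modulation power delivery described above can be pictured with a short sketch. The linear knob-to-duty-cycle mapping and the numeric values (pack voltage, load resistance) are assumptions chosen for illustration only and are not taken from the disclosure.

```python
# Illustrative sketch only: the linear mapping from the variable actuator to the PWM
# duty cycle, the 18 V pack voltage and the 3 ohm load are assumed values.

BATTERY_VOLTAGE = 18.0   # assumed pack voltage
LOAD_RESISTANCE = 3.0    # assumed resistance of a heating element, in ohms


def duty_cycle(knob_position):
    """Map the variable actuator (0.0 .. 1.0) linearly to a PWM duty cycle."""
    return max(0.0, min(1.0, knob_position))


def average_power(knob_position):
    """Average power delivered to a resistive load for a given knob position."""
    d = duty_cycle(knob_position)
    return d * BATTERY_VOLTAGE ** 2 / LOAD_RESISTANCE


# At full knob travel this sketch delivers 18 V^2 / 3 ohm = 108 W, i.e. on the order
# of the roughly 100 W proportional delivery mentioned above; half travel gives 54 W.
print(round(average_power(1.0), 1))   # 108.0
print(round(average_power(0.5), 1))   # 54.0
```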
Additionally, the housing201has a pivot projection202which allows pivoting attachment to various tools. The pivot projection202attaches to a tool arm rest300. The tool arm rest300may be used to hold a variety of different tools. In other embodiments, the pivot projection202may attach to an alternate tool holder350. In other embodiments there are additional tool specific features, such as a tool rest320. As shown inFIGS.2-6, the battery pack100serves as a weighted base for each of the tools400,500,600,700,800. The battery pack100may be placed on a stable surface, such as a flat horizontal surface made by a table or floor, and remain in place. The tools400,500,600,700,800each have working portions which can be remote from the battery pack100and the respective bases and base housings. The working portions of the tools may be connected to the base, base housings and battery packs100by a connection section. The connection section may in some instances be a cord, as in the rotary tool400, glue gun500and soldering iron600. In other instances the connection section may be a movable stand, such as with the light700and the fan800. In each instance, the common control hardware and a similar housing can be used to connect to the battery pack100serving as a weighted base of the product. At the same time, each tool can perform work away from the base and battery pack100owing to the connection section. As is shown inFIG.1, the battery pack100is connected to the base200in a direction A. The PCB220is disposed in a plane parallel to the direction A. The PCB220is also disposed in a plane parallel to a bottom of the battery pack100and a central major plane of the battery pack100. FIG.2illustrates the rotary tool400. The rotary tool400is powered by the removable battery pack100. The rotary tool400has a base450with a base housing451. The base housing451houses the same components as the base housing201shown inFIG.1. In particular, the base housing451houses a printed circuit board (PCB)220on which a controller221and a memory222are mounted. The user operated actuator455is connected to the controller221through the PCB220. As shown inFIG.2, the rotary tool400is connected to the base housing451by a cord401which carries power to the rotary tool400. The cord401allows the rotary tool400to be used and positioned remote from the base housing451and in a variety of orientations. FIG.3illustrates the glue gun500. The glue gun500is also powered by the removable battery pack100. The glue gun500has a base550with a base housing551. The base housing551houses the same components as the base housing201shown inFIG.1. In particular, the base housing551houses a printed circuit board (PCB)220on which a controller221and a memory222are mounted. The user operated actuator555is connected to the controller221through the PCB220. In this case, the user operated actuator555is on the glue gun housing. As shown inFIG.3, the glue gun500is connected to the base housing551by a cord501which carries power to the glue gun500. The cord501allows the glue gun500to be used and positioned remote from the base housing551and in a variety of orientations. The glue gun500may rest on tool rest520. FIG.4illustrates the soldering iron600. The soldering iron600is powered by the removable battery pack100. The soldering iron600has a base650with a base housing651. The base housing651houses the same components as the base housing201shown inFIG.1.
In particular, the base housing651houses a printed circuit board (PCB)220on which a controller221and a memory222are mounted. The user operated actuator655is connected to the controller221through the PCB220. In this case, the user operated actuator655controls the temperature of the soldering iron600. As shown inFIG.4, the soldering iron600is connected to the base housing651by a cord601which carries power to the soldering iron600. The housing651includes a recess for a sponge653. The cord601allows the soldering iron600to be used and positioned remote from the base housing651and in a variety of orientations. The soldering iron600is shown inFIG.4resting in a tool rest620. The tool rest620is pivotably attached to the base housing651. FIG.5illustrates the lamp light700. The lamp light700is powered by the removable battery pack100. The lamp light700has a base750with a base housing751. The base housing751houses the same components as the base housing201shown inFIG.1. In particular, the base housing751houses a printed circuit board (PCB)220on which a controller221and a memory222are mounted. The user operated actuator755is connected to the controller221through the PCB220. The user operated actuator755can turn the light on and off and adjust the brightness of the light700. As shown inFIG.5, the lamp light700includes a collapsible stand701for changing the orientation of the light given off by the lamp light700. FIG.6illustrates the fan800. The fan800is powered by the removable battery pack100. The fan800has a base850with a base housing851. The base housing851houses the same components as the base housing201shown inFIG.1. In particular, the base housing851houses a printed circuit board (PCB)220on which a controller221and a memory222are mounted. The user operated actuator855is connected to the controller221through the PCB220. The user operated actuator855can turn the fan800on and off and control the speed of rotation of the fan800. As shown inFIG.6, the fan800includes a pivoting stand801for changing the orientation of the fan800. The pivoting stand801includes two pivots802and803. FIG.7is a schematic illustration of the battery pack100, base450,550,650,750,850and the various tools400,500,600,700,800. As shown, the battery pack100is connected to the controller221. As discussed above, the controller221provides a PWM power delivery to the tools400,500,600,700,800, such as a tool specific component900. The tool specific component900varies depending on the particular tool. For example, the tool specific component900may be a motor in the event of the rotary tool400and the fan800. The motor would be driven to drive the rotary tool400or the fan blades of the fan800. In the case of the lamp light700, the tool specific component900may be one or more light-emitting-diodes (LEDs). The LEDs are driven by the power supplied by the controller221from the battery pack100to produce the light of the lamp light700. For the glue gun500and the soldering iron600, the tool specific component900may be a heating element such as a resistive heating element. As will be appreciated, the heating element will provide the heat for melting the glue for the glue gun500or allowing soldering by the soldering iron600. In addition to the tools shown inFIGS.2-6, other small detail tools may be part of the system. For example, small die grinders, chisels, polishers and reciprocating knives. As will be appreciated, the removable battery pack100is a power tool battery pack. 
Accordingly, the battery pack100may also power other power tools including larger tools such as a drill, impact driver, circular saw, etc. which may not share the common base, electronics and connection section as the embodiments of the present application. Although the present technology has been described in detail for the purpose of illustration based on what is currently considered to be the most practical and preferred embodiments, it is to be understood that such detail is solely for that purpose and that the technology is not limited to the disclosed embodiments, but, on the contrary, is intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims. For example, it is to be understood that the present technology contemplates that, to the extent possible, one or more features of any embodiment can be combined with one or more features of any other embodiment. Additionally, while the exemplary embodiment is described with respect to an oscillating tool, the methods and configurations may also apply to or encompass other power tools such as other tools holding accessories. | 11,570 |
11858107 | DETAILED DESCRIPTION OF THE DRAWINGS FIG.1shows a system20having a hand-portable working apparatus as shown inFIGS.1to11and an apparatus securing element5as shown inFIGS.1,8and9, in particular, and a climbing harness50as shown inFIG.1. In particular, the apparatus securing element5is arranged on, in particular fastened to, the climbing harness50. The working apparatus1has a working device2, at least one handle3and a transport eyelet4. The handle3is at least partially, in particular completely, arranged behind the working device2along an, in particular horizontal, longitudinal axis x of the working apparatus1. At least the handle3defines a handle opening3O. The handle opening3O extends approximately, in particular precisely, along the longitudinal axis x. InFIGS.1to7, the transport eyelet4is arranged approximately, in particular precisely, at a front end3OV of the handle opening3O along the longitudinal axis x. InFIGS.10and11, the transport eyelet4is arranged before the front end3OV of the handle opening3O along the longitudinal axis x. At least the transport eyelet4defines an eyelet opening4O. In an attaching and/or detaching position EAS, as shown inFIGS.1to4and8to10, of the transport eyelet4, an eyelet opening normal4ON of the eyelet opening4O is oriented approximately, precisely inFIG.10, along an, in particular vertical, axis z of the working apparatus1. In detail, the handle opening3O extends approximately, in particular precisely, along the vertical axis z. In the exemplary embodiment shown, the transport eyelet4is arranged above an upper end3OO of the handle opening3O along the vertical axis z. In alternative exemplary embodiments, the transport eyelet can be arranged approximately, in particular precisely, at the upper end of the handle opening along the vertical axis. Furthermore, the working device2extends, in particular forward, along the longitudinal axis x. InFIGS.1to7, the working apparatus1is a saw1′, in particular a chain saw1″, in particular a top-handle saw1′″. Additionally or alternatively, the working device2has a saw rail2′ and a saw chain2″, in particular is a saw rail2′ and a saw chain2″. InFIGS.10and11, the working apparatus1is a leaf blower1″″. Additionally or alternatively, the working device2has a blowing tube2′″, in particular is a blowing apparatus2′″. In addition, the transport eyelet4is arranged to the side of the handle opening3O along an, in particular horizontal, transverse axis y of the working apparatus1. InFIGS.1to7or in the case of the saw1′, the transport eyelet4or the eyelet opening4O is arranged on the right of the handle opening3O. InFIGS.10and11or in the case of the leaf blower1″″, the transport eyelet4or the eyelet opening4O is arranged on the left of the handle opening3O. In particular, the working apparatus1inFIGS.10and11or the leaf blower1″″ has only a single handle3. In detail, the working apparatus1, in particular inFIGS.1to7or the saw1′, has a further handle6. At least the further handle6defines a further handle opening6O. The further handle6is arranged opposite the transport eyelet4to the side, in particular on the left, of the handle opening3O along the transverse axis y. In particular, the further handle opening6O extends approximately, in particular precisely, along the transverse axis y, in particular and approximately along the vertical axis z. In the exemplary embodiment which is shown, the further handle6is arranged approximately, in particular precisely, at the front end3OV of the handle3O along the longitudinal axis x. 
In alternative exemplary embodiments, the further handle can be arranged before the front end of the handle opening along the longitudinal axis. Furthermore, the handle3with the front end3OV of the handle opening3O, the transport eyelet4and/or the further handle6are/is arranged approximately, in particular precisely, at a center of gravity G of the working apparatus1along the longitudinal axis x, as shown inFIGS.1and7and for the saw1′. In addition, the handle3, the transport eyelet4and/or the further handle6are/is arranged above the center of gravity G of the working apparatus1along the vertical axis z, as shown inFIGS.1to7,10and11. Furthermore, the working apparatus1has an apparatus housing7. The handle3, the transport eyelet4and/or the further handle6are/is arranged at the top of the apparatus housing7along the vertical axis z, as shown inFIGS.1to7,10and11. In addition, the transport eyelet4is at least partially, in particular completely, adjustable, in particular rotatable, between the attaching and/or detaching position EAS and a stowing position VS which is different from the attaching and/or detaching position EAS, as shown inFIGS.5to7and11. In particular, the transport eyelet4protrudes from the handle3in the attaching and/or detaching position EAS. Additionally or alternatively, the transport eyelet4lies against the handle3in the stowing position VS. Furthermore additionally or alternatively, the eyelet opening normal4ON is oriented approximately, in particular precisely, along the transverse axis y of the working apparatus1in the stowing position VS. In detail, the handle3is arranged for carrying by one hand100and the transport eyelet4is arranged for being simultaneously adjusted by the same hand100, as shown inFIG.5. In particular, the transport eyelet4is arranged on, in particular fastened to, the handle3. Furthermore, the working apparatus1has at least one locking member8. The at least one locking member8is designed to fix, in particular fixes, the transport eyelet4in the attaching and/or detaching position EAS and/or in the stowing position VS. In particular, the working apparatus1has a locking member8in the form of a stop and a locking member8in the form of a spring for fixing the transport eyelet4in the attaching and/or detaching position EAS by a form fit and a force fit. Additionally or alternatively, the working apparatus1has a locking member8in the form of a projection for fixing the transport eyelet4in the stowing position VS by a form fit. In addition, a portion4aof an outer contour4A of the transport eyelet4has a depression4T for opening a snapper9of the apparatus securing element5having a snap hook10for hooking the transport eyelet4in the snap hook10, as shown inFIG.8. In particular, the portion4aof the outer contour4A faces away from the handle3, in particular and is remote therefrom, in the attaching and/or detaching position EAS. Furthermore, a portion4bof an inner contour4I of the transport eyelet4is shaped for opening the snapper9of the apparatus securing element5having the snap hook10for unhooking the transport eyelet4from the snap hook10, as shown inFIG.9. In particular, the portion4bof the inner contour4I faces away from the handle3, in particular and is close thereto, in the attaching and/or detaching position EAS. In particular, the transport eyelet4or the contour thereof is heart-shaped.
Once again in other words: a handle opening normal or orthogonal of the handle opening3O points or is oriented approximately or substantially, in particular precisely, along or parallel to or on the transverse axis or in the transverse direction y of the working apparatus1. In particular, the handle3or the handle opening3O defines a handle opening plane, in particular lies in a handle opening plane, the handle opening normal being normal or orthogonal to the handle opening plane. Additionally or alternatively, in the attaching and/or detaching position EAS of the transport eyelet4, the eyelet opening4O extends approximately or substantially, in particular precisely, along or parallel to the longitudinal axis x and/or the transverse axis y. Furthermore additionally or alternatively, in the stowing position VS of the transport eyelet4, the eyelet opening4O extends approximately or substantially, in particular precisely, along or parallel to the longitudinal axis x and/or the vertical axis z. Furthermore additionally or alternatively, a further handle opening normal or orthogonal of the further handle opening6O points or is oriented approximately or substantially, in particular precisely, along or parallel to or on the longitudinal axis or in the longitudinal direction x of the working apparatus1. In particular, the further handle6or the further handle opening6O defines a further handle opening plane, in particular lies in a further handle opening plane, wherein the further handle opening normal is normal or orthogonal to the further handle opening plane. Moreover, what has been stated previously permits an orientation of the working apparatus1, in particular in the form of the saw1′, and/or the working device2, in particular in the form of the saw rail2′ and the saw chain2″, to the rear at a user or as shown inFIG.1when the transport eyelet4is attached or mounted in the apparatus securing element5, in particular having the snap hook10, arranged on, in particular fastened to, the climbing harness50, in particular arranged on, in particular fastened to, the user, in particular by means of the force of gravity or gravitational force. This therefore makes it possible to avoid or even to prevent injury to the user by the working apparatus1, in particular by its working device2, in particular when transporting the working apparatus1. As the exemplary embodiments which are shown and have been described above make clear, the invention provides an advantageous hand-portable working apparatus which has improved properties and is therefore more user-friendly, and an advantageous system having such a working apparatus. | 9,609 |
11858108 | DETAILED DESCRIPTION FIG.1shows an electrical device which is configured as a hand-held power tool300. According to the represented specific embodiment, hand-held power tool300is mechanically and electrically connectable to rechargeable battery pack100for battery-supplied power. InFIG.1, hand-held power tool300is configured as a cordless combi drill, by way of example. It is pointed out, however, that the present invention is not restricted to cordless combi drills, but rather may be utilized with different hand-held power tools300which are operated with the aid of a rechargeable battery pack100. Hand-held power tool300includes a base body305, on which a tool holder310is fastened, and includes a handle315which includes an interface380at which a corresponding interface180of a rechargeable battery pack100according to the present invention is situated, in the locked position in this case. Rechargeable battery pack100is configured as a sliding rechargeable battery pack. During the mounting of rechargeable battery pack100on hand-held power tool300, receiving arrangement provided on hand-held power tool300, for example, guide grooves and guide ribs, are brought into engagement with corresponding guide elements150of rechargeable battery pack100, rechargeable battery pack100being inserted in a sliding direction y along the receiving arrangement of handle315and rechargeable battery pack100is pushed along a lower outer surface316of handle315, which is oriented essentially perpendicularly to the longitudinal direction of handle315, into the rechargeable battery pack receptacle of a hand-held power tool300. In the position shown inFIG.1, rechargeable battery pack100is fastened on handle315of hand-held power tool300and is locked with the aid of locking arrangement. The locking arrangement includes a locking element and an actuating element220. By way of the actuation of actuating arrangement220, rechargeable battery pack100may be released from handle315of hand-held power tool300. FIGS.2through5show a rechargeable battery pack100according to the present invention for a hand-held power tool300. This includes a rechargeable battery pack housing110made up of a first housing component120and a second housing component130, the housing accommodating, between first housing component120and second housing component130, at least one battery cell, which may be a plurality of battery cells400, as represented here, which are interconnected in parallel or in series. Battery cells400may be positioned and held in rechargeable battery pack housing110with the aid of a cell holder600for insulating battery cells400with respect to each other. In addition, battery cells400may be provided with an insulating sheathing430, which is known per se from the related art, for the insulation with respect to each other. Cardboard sleeves or plastic sleeves, for example, shrinkable tubing, may be provided as insulating sheathing430. Insulating sheathing430is described further below in conjunction withFIG.6. In the shown embodiment variant rechargeable battery pack100is configured as a sliding rechargeable battery pack. For the releasable mounting of rechargeable battery pack100on a hand-held power tool300or on a charging device, rechargeable battery pack100includes an interface180for the releasable mechanical and electrical connection to a corresponding interface380of hand-held power tool300or a corresponding interface of the charging device. 
During the mounting of rechargeable battery pack100, receiving arrangement, for example, guide grooves and guide ribs, of hand-held power tool300or of the charging device are brought into engagement with rechargeable battery pack100in order to accommodate the corresponding guide elements of rechargeable battery pack100, rechargeable battery pack100being inserted along the receiving arrangement in a contacting direction y, and interface180of rechargeable battery pack100being pushed into corresponding interface380of hand-held power tool300or the corresponding interface of the charging device. Rechargeable battery pack100may be assigned to hand-held power tool300and/or the charging device via interfaces180,380. In order to lock rechargeable battery pack100on handle315, rechargeable battery pack100is pushed in a sliding direction y along handle315, in particular along a lower outer surface of handle315, which is oriented essentially perpendicularly to the longitudinal direction of handle315. In the position shown inFIG.1, rechargeable battery pack100is locked on handle315with the aid of locking arrangement200. Locking arrangement200include, inter alia, a locking element210, which is indicated only schematically, and an actuating element220. By way of the actuation of actuating element220, rechargeable battery pack100may be released from handle315of hand-held power tool300. After rechargeable battery pack100is unlocked, it may be separated from handle315, in particular by sliding rechargeable battery pack100counter to sliding direction y along a lower surface of handle315. During the mounting of rechargeable battery pack100on a hand-held power tool300, locking element210is brought into engagement with a corresponding receptacle—which is not shown in greater detail—in handle315of hand-held power tool300. As is apparent inFIG.3, interface180also includes contact elements140for electrical contacting of rechargeable battery pack100to hand-held power tool300or the charging device. Contact elements143are configured as voltage contact elements and are used as charging and/or discharging contact elements. Contact elements144are configured as signal contact elements and are used for the transmission of signals from rechargeable battery pack100to hand-held power tool300or the charging device and/or from hand-held power tool300or the charging device to rechargeable battery pack100. FIG.4shows a rechargeable battery pack100in an exploded view. In this case, it is clearly apparent that rechargeable battery pack housing110includes a cell holder600which includes a plurality of battery cells400interconnected in a series circuit, second housing component130directly forming cell holder600. Cell holder600simultaneously forms second housing component130. The connection of battery cells400to each other is implemented via cell connectors500. Furthermore, it is apparent that individual battery cells400are accommodated spaced apart from each other in order to be mechanically fixed in cell holder600. Cell holder600is used not only for fixing battery cells400in rechargeable battery pack housing110or in second housing component130, but also for cooling battery cells400and is made up of a thermally conductive material, for example aluminum or a plastic. Moreover, cell holder600includes sleeve-like insulating walls620, so that individual battery cells400are separated and an electrical insulation of individual battery cells400from each other may be ensured. 
The heat transmission resistance between adjacent battery cells400and between battery cells400and cell holder600may be low in this case, so that the waste heat generated by battery cells400may be well dissipated to the outside and an overheating of rechargeable battery pack100in the interior may be prevented. A circuit board810of a rechargeable battery pack electronics system is fastened on the surface of cell holder600, within rechargeable battery pack housing110. Furthermore, the rechargeable battery pack electronics system includes contact elements140for establishing the electrical and mechanical connection between rechargeable battery pack100and hand-held power tool300or between rechargeable battery pack100and the charging device. The connection between the rechargeable battery pack electronics system and cell holder600is ensured by way of fastening elements which are not shown in greater detail. In the specific embodiment represented inFIG.4, rechargeable battery pack housing110further includes two lateral components125, only one of the two lateral components125being represented inFIG.4. In the assembled state, lateral components125hold first housing component120and second housing component130together in such a way that a detachment of first housing component120from second housing component130, or vice versa, is prevented. Alternative installation and fastening principles of the housing components of rechargeable battery pack housing110are possible. In the specific embodiment represented, it is clearly apparent that cell holder600forms, in areas, an outer side of second housing component130or of rechargeable battery pack100, cell holder600alternatively also being able to form, in areas, an outer side of first housing component120. As is described in greater detail further below, cell holder600essentially completely encompasses lateral surfaces405of battery cells400. In this case, essentially only end faces410of battery cells400are exposed, as is apparent inFIGS.4and5. Lateral components125form an outer side of rechargeable battery pack100in the area of end faces410. Cell holder600includes sleeve-like insulating walls620, between which cylindrical cell openings625for accommodating battery cells400are located. Battery cells400are pressed into cell openings625in such a way that a form-locked and force-fit connection is established between cell holder600and battery cells400. In this way, an electrical insulation of battery cells400with respect to each other is achieved. After battery cells400have been pressed in, cell holder600rests on battery cells400in the area of cell openings625in an essentially gap-free manner. In addition to a secure accommodation of battery cells400in cell holder600, good heat dissipation of the heat generated during the operation of battery pack100away from battery cells400may be achieved in this way. In order to achieve what may be a gap-free fit of battery cells400in cell holder600, a diameter D1 of cell openings625may be selected in such a way that diameter D1 before battery cells400are pressed into cell openings625is between 97% and 99%, in particular between 97.5% and 98.5% of a diameter D2 of corresponding battery cells400. 
A gap-free fit of battery cells400in cell holder600is achievable, on the one hand, when a diameter D1 is selected for cell openings625in such a way that diameter D1 of cell openings625before battery cells400are pressed into cell openings625is between 0.05 mm and 0.20 mm, in particular between 0.10 mm and 0.15 mm, less than a diameter D2 of corresponding battery cells400and, on the other hand, the gap-free fit may be achieved when a circumference of cell openings625before battery cells400are inserted into cell openings625is between 97% and 99.5% of a circumference of the cell casing, which may be between 98% and 99%. In the provided method, elastic and/or plastic material expansions therefore occur in the area of cell holder600. An adequate material for cell carrier600must be selected in order to ensure a damage-free insertion of battery cells400into cell openings625. Cell holder600is made up of a plastic material, alternatively a thermoplastic polymer, a thermosetting plastic, or an elastomer; in particular, a polyethylene may also be used. In this case, the polyethylene has a density between 0.90 g/cm³ and 1.0 g/cm³, which may be between 0.95 g/cm³ and 0.99 g/cm³, particularly between 0.96 g/cm³ and 0.98 g/cm³. In order to make the press-fit process more efficient and gentler on the material, cell holder600is preheated before the press-fit process to a temperature between 60° C. and 110° C., in particular between 70° C. and 80° C. This has the advantage, on the one hand, that thermal expansions set in, which anticipate a part of the necessary deformations occurring during the press-fit; on the other hand, the deformability of the thermoplastic polymers increases as the temperature increases, which is advantageous for the manufacturing process. The material expansion occurring in cell holder600after battery cells400have been pressed in is between 0.2% and 5%, in particular between 0.5% and 3%, particularly between 1% and 2%. As a result, a sufficiently high cell holding force for fixing battery cells400in cell carrier600is mobilized. This cell holding force between cell holder600and pressed-in battery cells400is between 20 N and 400 N, in particular between 100 N and 300 N, particularly between 150 N and 250 N. Furthermore, cell connectors500are represented inFIG.4, with the aid of which an electrical interconnection of battery cells400to each other in a parallel and/or series circuit may be implemented. FIG.5is a sectional view of rechargeable battery pack100according to the present invention, it also being apparent here that cell holder600forms second housing component130and, therefore, also an outer side of rechargeable battery pack housing110. Moreover, it may be gathered fromFIG.5that lateral surfaces405of two battery cells400situated next to each other in cell holder600do not contact each other, but rather are mechanically and electrically separated from each other by sleeve-like insulating walls620. It is also clear fromFIG.5, as it is fromFIG.4, that cell holder600includes, in the area of cell openings625, stops630which correspond to battery cells400and ensure a desired position of battery cells400in cell holder600. Stops630ensure a desired position of battery cells400in cell holder600along longitudinal axis x of battery cells400. Due to the fact that stops630ensure the position of battery cells400in cell holder600, they make it easier to correctly press the battery cells into the cell holder.
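As a rough numerical illustration of the press-fit dimensioning quoted above, the following minimal Python sketch computes the opening diameter range and the resulting diametral interference from the 97% to 99% ratio, and checks that the alternative 0.05 mm to 0.20 mm undersize falls inside the quoted expansion band. The 21.0 mm cell diameter is an assumed example value only and is not taken from this description.

# Illustrative press-fit sizing for cell openings 625 and battery cells 400 (assumed example values).
D2 = 21.0  # mm, assumed battery cell diameter D2 (example only)

# Opening diameter D1 chosen as 97% to 99% of D2, per the passage above.
D1_lo, D1_hi = 0.97 * D2, 0.99 * D2
print(f"D1 range: {D1_lo:.2f} mm to {D1_hi:.2f} mm")
print(f"diametral interference: {D2 - D1_hi:.2f} mm to {D2 - D1_lo:.2f} mm")

# The alternative 0.05 mm to 0.20 mm undersize implies roughly this relative expansion of the
# opening, which lies within the 0.2% to 5% material expansion range quoted above.
for undersize in (0.05, 0.20):
    expansion = 100.0 * undersize / (D2 - undersize)
    print(f"{undersize:.2f} mm undersize -> about {expansion:.2f}% diametral expansion after pressing in")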
FIG.6shows, on the left side, a cylindrical battery cell400including an insulating sheathing430, which is known per se from the related art, and, on the right side, a cylindrical battery cell400without an insulating sheathing430, battery cells400each including a lateral surface405which extends in parallel to a longitudinal axis x and is limited by two end faces410situated perpendicularly to longitudinal axis x. Lateral surface405and end faces410form an outer shell of battery cell400. The electrical poles of battery cells400for the electrical contacting are located on end faces410. The outer shell of battery cells400is made up of an electrically conductive material, in particular a metal, which may be aluminum. Insulating sheathing430essentially completely surrounds at least lateral surface405. End faces410, in particular the poles at end faces410, are exposed in order to allow for the electrical contacting. End faces410, in particular the poles at end faces410, are free of insulating sheathing430. Electrically non-conductive materials, for example, paper, cardboard, and plastic, are suitable for use as insulating sheathing430. Insulating sheathing430forms, in particular, a thin sleeve which rests closely on lateral surface405. In addition to the described and illustrated specific embodiments, further specific embodiments are conceivable, which may include further modifications and combinations of features. | 15,072 |
11858109 | DETAILED DESCRIPTION A preferred exemplary embodiment of a side handle100according to the invention for an electric hand-held power tool200, for example a hammer drill, is illustrated inFIGS.1A and1B. The side handle100has a grip region10designed to be gripped by a user, and a clamping unit20, by means of which the side handle100can be releasably fastened to a machine neck210of the hand-held power tool200. The clamping unit20has an operating element in the form of a clamping lever15, by means of which the clamping unit20is able to be clamped and unclamped, whereinFIG.1Ashows the clamped state SZ andFIG.1Bshows the unclamped state EZ. When the clamping unit20is in the clamped state SZ, the operating element in the form of a clamping lever15is flush or at least substantially flush with the surface of the grip region10. The clamping lever15is arranged on the side handle100so as to be rotatable about an axis of rotation DA relative to the grip region10, wherein the axis of rotation DA extends perpendicularly to a working axis AA of the hand-held power tool200when the side handle100is fastened to the hand-held power tool200. The grip region10can consist of or exhibit for example polypropylene, ABS, polyamide or polyurethane. The clamping lever15can consist of the same material as the grip region10. The clamping lever15can consist of or exhibit a thermoplastic elastomer (TPE). The clamping unit20of the exemplary embodiment inFIGS.1A and1Bhas two clamping bodies21,23in the form of cylinder portions, which are each oriented coaxially with a clamping screw25, extending along the axis of rotation DA, of the clamping unit20. As is readily apparent fromFIG.1B, the clamping bodies21,23are formed in a complementary manner to one another, i.e. they form substantially a cylinder when they rest against one another with minimum length (FIG.1B), wherein the clamping bodies21,23are substantially in full contact along the wave profile W. The wave profile W does not necessarily have to have a straight section profile, but rather, as in the exemplary embodiment inFIG.1, it can have a rudimentary sinusoidal profile. The clamping lever15and the first clamping body21can be formed integrally with one another. As is apparent fromFIGS.1A and1B, the clamping unit20also has a band clamp holder28and a band clamp29provided for encircling the machine neck210. The band clamp holder28has two mutually coaxial holding bodies31,33, to which the band clamp29is fastened. Provided both between the second clamping body23and the first holding body31and between the second holding body and a grip end piece11is a crown toothing KR, which, when the clamping unit is in the clamped state SZ as shown inFIG.1A, prevents the grip region10from rotating about the axis of rotation DA. In the clamped state SZ, the clamping bodies21,23rest with maximum length on one another—rotated through 180 degrees about the axis of rotation with respect to one another with regard toFIG.1B. In other words, a rotation of the clamping lever through 180 degrees is sufficient to clamp and unclamp the clamping unit20. Other wave profiles are possible. A second preferred exemplary embodiment of a side handle100according to the invention is illustrated inFIGS.2A and2B. Here,FIG.2Ashows a plan view coaxially with the working axis AA andFIG.2Bshows a section through the axis of rotation DA. 
When the clamping unit20is in the clamped state SZ, the operating element in the form of a clamping lever15is flush or at least substantially flush with the surface of the grip region10. BothFIG.2AandFIG.2Bshow the clamped state SZ. In addition to the clamping lever15shown inFIGS.1A and1B, a stop27is formed on the clamping lever15of the exemplary embodiment inFIG.2A, said stop27being formed in a complementary manner to a counterpart stop17provided on the grip region10(see, e.g.,FIG.2B). The stop27is formed by a recess at the outer end of the clamping lever15, this being readily apparent for example inFIG.4A. The counterpart stop17formed in a complementary manner is formed in one piece with the grip region10. As a result of the stop27, it is possible to turn out the clamping lever only in one direction about the axis of rotation (out of the plane of the figure inFIGS.2A and2B). At the same time, rotation of the clamping lever15about the axis of rotation DA is limited mechanically to 360 degrees, in this case for example to less than 360 degrees, by the stop27. The clamping unit20has two clamping bodies21,23in the form of cylinder portions, which are each oriented coaxially with a clamping screw25, extending along the axis of rotation DA, of the clamping unit20. In this case, the first clamping body21is located entirely within a volume V of the side handle, whereas the second clamping body23projects at least partially beyond the volume V along the axis of rotation DA in the clamped state SZ. The clamping unit20of the side handle inFIGS.2A and2Balso has a band clamp holder28and a band clamp29A,29B provided for encircling the machine neck210, wherein the band clamp29A,29B is mounted in the band clamp holder28and is designed preferably without a clamping screw25passing through the band. This is readily apparent fromFIG.2B. Because the band clamps29A,29B are mountable in this way and, in particular, because there are no through-bores intended for a clamping screw25, the band clamps29A,29B can be replaced relatively easily with the side handle100otherwise fully assembled. The band clamp holder28inFIGS.2A and2Bis advantageously configured to accept both a band clamp29A with a small diameter D1and a band clamp29B with a large diameter D2. InFIGS.2A and2B, both band clamps29A,29B are illustrated at the same time, in order to make it clear that both band clamps29A,29B are mountable in one and the same band clamp holder28, to be more precise in a holding lug26of the band clamp holder28. In use, only one of the band clamps29A,29B is installed, of course. The band clamp holder28has a depression24, recessed in a radial direction RR, with a holding lug26, this being readily apparent from the detail illustration inFIG.3B. The holding lug26is configured such that, instead of the mounted band clamp29A with the first diameter D1, the second band clamp29B with the second diameter D2, different from the first diameter, is able to be mounted. For this purpose, the holding lug26has a spring contact face22(cf.FIGS.3A and3B), the curvature K1of which is greater than a curvature K2of a clamping face30, in contact with the machine neck210, of the band clamp holder28. It is readily apparent fromFIG.3Ahow the band clamp29B with the large diameter closely follows the spring contact face22of the holding lug26. Back toFIGS.2A and2B, it is readily apparent that the band clamp holder28has two mutually coaxial holding bodies31,33.
A first of the holding bodies31is able to be clamped together with one of the clamping bodies23in the form of a cylinder portion (which projects at least partially beyond the volume V along the axis of rotation DA in the clamped state SZ) via a cone/hollow cone pairing35(cf. alsoFIG.3A). In the same way, such a cone/hollow cone pairing35can be provided between the second holding body33and a grip end piece11. In the exemplary embodiment inFIGS.2A and2B, the grip end piece11is formed in one piece with the handle10. Alternatively, the grip end piece11and the handle can be mutually separate parts. The cone/hollow cone pairing35, to which a cone35A formed on the second clamping body23and a hollow cone35B formed on the first holding body31are assigned, will now be explained in more detail with reference toFIG.3A. As a result of the cone/hollow cone pairing35, it is possible to pivot the handle10about the axis of rotation DA in a stepless manner. This is in contrast to a crown toothing KR provided in the exemplary embodiment inFIG.1. As a result of the cone/hollow cone pairing35, wear-independent clamping of the clamping unit20is possible, since the force-fitting cone/hollow cone pairing35automatically adjusts itself in the direction of the axis of rotation DA in the event of any wear. As an alternative to a stepless configuration of the cone/hollow cone pairing35, the cone35A and/or the hollow cone35B can be chamfered. Thus, for example the hollow cone35B that is readily apparent inFIG.4Chas eight cone flanks35F. FIG.3Bshows a holding body31,33as part of the band clamp holder28. The depression24formed therein and the function of the holding lug26have already been explained above. The holding body31,33has, on a side facing the machine neck210, at least one surface portion OF that, compared with the remaining part VT of the band clamp holder28, has a higher coefficient of friction. Thus, the holding body31,33can consist for example substantially of a comparatively hard ABS plastic and the surface portion OF of a comparatively soft thermoplastic elastomer or some other rubberlike material. This is in order to compensate for any tolerances between the machine neck and the band clamp holder and/or to increase the friction between these components. Alternatively, a first and second cutout can be provided in the band clamp29at the ends. The first cutout is used to position a first holding element and the second cutout is used to position a second holding element. Both the first and the second holding element are connected to the band clamp29by a clip mechanism. Furthermore, each holding element is manufactured from acrylonitrile-butadiene rubber. Acrylonitrile-butadiene rubber can also be referred to as nitrile rubber (AB) or nitrile butadiene rubber (NBR). Finally,FIGS.4A,4B and4Cshow detail views of the side handle100, wherein the clamping lever15is in each case in different rotational positions.FIG.4Ashows the clamped state SZ, in which the clamping lever has been deflected through 0 degrees and thus the stop27formed on the clamping lever rests against the complementary counterpart stop17formed integrally with the handle10. The second clamping body23and the first holding body31(and the second holding body33and the grip end piece11, which cannot be seen here) are pressed against one another, such that the grip region10cannot be rotated about the axis of rotation DA. FIG.4Bshows the unclamped state EZ, in which the clamping lever has been deflected through 180 degrees.
The second clamping body23is spaced apart from the first holding body along the axis of rotation DA. The grip region10can be rotated or pivoted about the axis of rotation DA as desired. At the same time, the entire side handle can be rotated or pivoted about the working axis AA of a hand-held power tool that is not shown here. Equally, the entire side handle cannot be rotated or pivoted about the working axis AA of a hand-held power tool that is not shown here. Finally,FIG.4Cshows the unclamped state EZ, in which the clamping lever has been deflected through 270 degrees. The second clamping body23is spaced further apart from the first holding body along the axis of rotation DA than inFIG.4B. In the state shown inFIG.4C, the side handle can be removed from the machine neck, which is not shown here.
LIST OF REFERENCE SIGNS
10 Grip region
11 Grip end piece
15 Clamping lever
17 Counterpart stop
20 Clamping unit
21 First clamping body
22 Spring contact face
23 Second clamping body
24 Depression
25 Clamping screw
26 Holding lug
27 Stop
28 Band clamp holder
29 Band clamp
29A Band clamp with a small diameter
29B Band clamp with a large diameter
30 Clamping face
31 First holding body
33 Second holding body
35 Cone/hollow cone pairing
35A Cone
35B Hollow cone
35F Cone flanks
100 Side handle
200 Electric hand-held power tool
210 Machine neck
AA Working axis
DA Axis of rotation
D1 First diameter
D2 Second diameter
EZ Unclamped state
KR Crown toothing
K1 First curvature
K2 Second curvature
OF Surface portion
RR Radial direction
SZ Clamped state
VT Remaining part
V Volume
W Wave | 11,890
11858110 | Before any embodiments of the invention are explained in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of components set forth in the following description or illustrated in the following drawings. The invention is capable of other embodiments and of being practiced or of being carried out in various ways. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. DETAILED DESCRIPTION FIG.1illustrates a powered fastener driver10capable of discharging fasteners (e.g., nails) into a workpiece, such as a concrete floor, wall, or ceiling. In some embodiments, the powered fastener driver10may be configured as a single-shot powered fastener driver capable of discharging individual fasteners, one at a time, as they are manually loaded into the fastener driver after each driving cycle. In other embodiments, the powered fastener driver10may be configured as a multi-shot powered nailer including a magazine holding a collated fastener strip, which does not require the user to manually reload the fastener driver after each driving cycle. With reference toFIGS.1-3and10-13, the powered fastener driver10includes a handle unit H and a driver unit N selectively and removably attachable to the handle unit H. A handle extension E is selectively attachable intermediate the handle unit H and the driver unit N to permit the user to lift the powered fastener driver10even higher toward an elevated work surface (e.g., a concrete ceiling) than without the handle extension E. The powered fastener driver10is operable in a first configuration with the handle unit H directly attached to the driver unit N (FIG.1). Alternatively, the powered fastener driver10is operable in a second configuration with the handle extension E coupled between the handle unit H and the driver unit N (FIG.2). In the illustrated embodiment, the powered fastener driver10is a gas-spring powered nailer10. With reference toFIG.4, the driver unit N includes a housing12(FIG.3) containing therein a motor (e.g., a brushless direct current electric motor) that supplies a motive force to operate the powered nailer10. The housing12further contains a cylinder16, a drive piston18located within the cylinder16for reciprocal movement therein, and a drive blade20attached to the drive piston18. The powered nailer10also includes a gas spring22(i.e., a fixed quantity of compressed gas, such as nitrogen) within the cylinder16which, during a fastener driving operation, expands within the cylinder16to displace the drive piston18and the drive blade20toward a workpiece or work surface to drive the fastener out a nosepiece24and into the workpiece or work surface. The powered nailer10also includes a lifter mechanism26coupled between the motor and the drive piston18. The lifter mechanism26returns the drive piston18and drive blade20toward a top-dead-center position within the cylinder16(shown inFIG.4), which compresses the gas spring22for a subsequent fastener driving operation. Referring back toFIG.3, the driver unit N further includes a first coupler28(also shown inFIG.10) selectively engageable with corresponding second and third couplers30,32(also shown inFIGS.10and11) located on each of the handle unit N and the handle extension E, respectively. When engaged, the first and second couplers28,30mechanically and electrically couple the driver unit N directly to the handle unit H. 
Likewise, the first and third couplers28,32engage to mechanically and electrically couple the driver unit N directly to the handle extension E. The first coupler28includes, in some embodiments, a pair of rails (not shown) engageable with corresponding grooves located on each of the corresponding second and third couplers30,32to form a tongue and groove mechanical coupling therebetween. The first coupler28further includes electrical contacts (e.g., male and/or female blade terminals) engageable with corresponding electrical contacts located on each of the corresponding second and third couplers30,32to form electrical connections therebetween. A latch mechanism may be provided on one or more of the first, second, and third couplers28,30,32to selectively secure and release the connections formed between the couplers28,30,32. The handle unit H includes a handle body34having the second coupler30, a gripping portion36for grasping by a user, a trigger38, and a battery receptacle40(also shown inFIG.10) for receiving a battery pack42. In the illustrated embodiment of the powered nailer10, the motor receives power from the battery pack42. The battery pack42may include any of a number of different nominal voltages (e.g., 12V, 18V, etc.), and may be configured having any of a number of different chemistries (e.g., lithium-ion, nickel-cadmium, etc.). Alternatively, the motor may be powered by a remote power source (e.g., a household electrical outlet) through a power cord extending from the handle unit H. In some embodiments, the handle unit H further includes a control unit (e.g., a printed circuit board assembly (PCBA)) for sending and receiving control signals between the battery pack42, the trigger38, the motor, and sensors associated with the powered nailer10. The handle extension E is selectively attachable to the driver unit N at a first end44, and is selectively attachable to the handle unit H at a second end46opposite the first end44. An elongated shaft segment48extends along a shaft axis50between the first and second ends44,46. In the illustrated embodiment, the shaft segment48extends along a fixed length to locate the nosepiece24near an elevated workpiece or work surface. In other embodiments (not shown), the shaft segment48may be adjustable (e.g., by telescoping) between a retracted position in which the nosepiece24can be located proximate and/or in contact with a relatively low elevated workpiece or work surface, and an extended position in which the nosepiece24can be located proximate and/or in contact with a relatively high elevated workpiece or work surface. A saddle52is affixed to the shaft segment48at the first end44and defines a nailer receptacle54for receiving a rear portion of the driver unit N (see alsoFIGS.5and11). The saddle52includes the third coupler32. When the handle extension E is coupled to the driver unit N, the saddle52supports the housing12of the driver unit N to provide a stable and secure connection therebetween. A fourth coupler56(FIG.12), similar in construction to the first, second, and third couplers28,30,32, is affixed to the saddle52via the shaft segment48and selectively engages the second coupler30to mechanically and electrically connect the handle extension E to the handle unit H. Electrical conductors such as wires (not shown) extend within the shaft segment48and electrically connect the electrical contacts of the third and fourth couplers32,56. 
FIG.6illustrates a free body diagram of the powered nailer10configured with the handle extension E coupled between the driver unit N and the handle unit H. The handle unit H, the handle extension E, the driver unit N, and the battery pack42include respective centers of gravity (CG's)58,60,62,64which, due to the generally linear construction of the powered nailer10, are each located generally in-line with the shaft axis50. An origin66is defined equidistant from first and second gripping locations68,70. The powered nailer10rotates about an axis of rotation that passes through the origin66. Since the handle extension E is coupled between the handle unit H and the nailer unit N, the handle unit and battery pack CG's58,64are located opposite to the handle extension and nailer unit CG's60,62with respect to the origin66. This reduces the overhead weight of the powered nailer10when configured with the handle extension E. Moreover, moments produced about the axis of rotation by a gravitational force acting on the handle unit and battery pack CG's58,64act to offset moments produced about the axis of rotation by the gravitational force acting on the handle extension and nailer unit CG's60,62. This helps to stabilize and balance the powered nailer10when configured with the handle extension E. One challenge is positioning the powered nailer10such that the nail will enter an overhead surface perpendicularly to the worksurface. With reference toFIGS.7and8, in some embodiments, an alignment device includes a digital readout72positioned on the powered nailer10that indicates to a user when the powered nailer10is positioned in an orientation to fire nails perpendicularly to the workpiece (FIG.7). If an orientation of the powered nailer10deviates too far from an acceptable range (e.g., more than +/−6 degrees) relative to a work surface, the digital readout72further indicates to the user a direction toward which the powered nailer10should be tilted to restore perpendicularity (FIG.8). The digital readout72communicates with a positioning sensor (e.g., an accelerometer or gyroscope), which determines how the powered nailer10is oriented based on a sensed gravity vector. In some embodiments, the powered nailer10may be calibrated by the user by manually positioning the powered nailer10in an orientation perpendicular to the work surface, and then initiating a calibration process of the positioning sensor so that the digital readout will subsequently direct the user toward that desired orientation (e.g., for firing nails into angled work surfaces). Another challenge is providing a comfortable means for inserting individual nails into the powered nailer10. With reference toFIGS.9A-9E, in some embodiments, the powered nailer10includes a slot74(FIG.9A) provided in a side of a barrel of the nosepiece24to receive the nail. In other embodiments, the powered nailer10includes a hinged barrel segment76(FIG.9B) that opens to receive a nail. In other embodiments, the powered nailer10includes a spring-loaded door78(FIG.9C) that slides to reveal an opening for receiving the nail. In other embodiments, the powered nailer10includes a bolt mechanism80(FIG.9D) for receiving the nail and positioning the nail for firing. In other embodiments, the powered nailer10includes a harmonica magazine82(FIG.9E) for receiving the nail. 
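The passage above does not specify how the positioning sensor output is turned into the digital readout72, so the following is only a minimal, hedged Python sketch of one plausible approach: derive the tilt of the drive axis from a sensed gravity vector and compare it against the +/-6 degree window mentioned above. The function names, the choice of the z axis as the drive axis, and the hint wording are illustrative assumptions, not details from this description.

import math

TOLERANCE_DEG = 6.0  # acceptable deviation from perpendicular, per the range quoted above

def tilt_from_gravity(gx, gy, gz):
    """Angle in degrees between the sensed gravity vector and the assumed drive axis (z)."""
    norm = math.sqrt(gx * gx + gy * gy + gz * gz)
    return math.degrees(math.acos(abs(gz) / norm))

def readout_hint(gx, gy, gz):
    """Mimics the indicator: OK inside the window, otherwise suggest a correction direction."""
    tilt = tilt_from_gravity(gx, gy, gz)
    if tilt <= TOLERANCE_DEG:
        return f"perpendicular within tolerance ({tilt:.1f} deg)"
    direction = "tilt sideways" if abs(gx) >= abs(gy) else "tilt forward/back"
    return f"out of range ({tilt:.1f} deg): {direction} to restore perpendicularity"

print(readout_hint(0.02, 0.03, 0.99))  # nearly square to a horizontal ceiling
print(readout_hint(0.20, 0.05, 0.98))  # tilted too far to one side

A calibration step like the one described above could simply store a reference gravity vector and measure tilt relative to it instead of the fixed z axis.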
In operation, the powered nailer10may be adjusted between the first configuration with the handle unit H coupled to the driver unit N (FIG.1), and in the second configuration further utilizing the handle extension E (FIGS.2and13), depending on the intent of the user. To operate the powered nailer10in the first configuration, the first coupler28of the driver unit N is brought into engagement with the second coupler30of the handle unit H to directly couple the driver unit N to the handle unit H. A nail is loaded into the barrel, and the nosepiece24is positioned adjacent the workpiece or work surface and oriented perpendicular thereto. The user pulls the trigger38, which energizes the motor and causes the nail to be fired into the workpiece or work surface. To operate the powered nailer10in the second configuration, the rear portion of the driver unit N is inserted into the nailer receptacle54defined in the saddle52, and the first coupler28is brought into engagement with the third coupler32to attach the handle extension E to the nailer unit N at the first end44. The second end46of the handle extension E is coupled to the handle unit H by engaging the fourth coupler56with the second coupler30. A nail is loaded into the barrel, and the nosepiece24is positioned adjacent the workpiece or work surface and oriented perpendicular thereto. The user pulls the trigger38, which energizes the motor and causes the nail to be fired into the workpiece or work surface. Since the handle unit H is decoupled from driver unit N in the second configuration, the powered nailer10has a smaller profile and thus can be fit into tighter spaces to reach the work surface (e.g., between ducts, conduits, joists, or other obstructions). FIG.14illustrates an extendable power tool100according to another embodiment of the invention. The power tool100includes the handle unit H and the handle extension E described above with respect to the powered nailer10. The power tool100further includes a tool unit T selectively and removably attachable to each of the handle unit H and the handle extension E. In some embodiments, the tool unit T includes an outdoor tool (e.g., a chain saw, a pole saw, a string trimmer, a hedge trimmer, etc.), a fastening tool (e.g., a drill, an impact driver, a hammer drill, a screw gun, etc.), a cutting tool (e.g., a reciprocating saw, an oscillating tool, a rotary tool, etc.), etc. The power tool100is operable in a first configuration with the handle unit H directly attached to the tool unit T. Alternatively, the power tool100is operable in a second configuration with the handle extension E coupled between the handle unit H and the tool unit T. The tool unit T includes a housing112containing therein a motor (e.g., a brushless direct current electric motor) that supplies a motive force to operate the power tool100. The tool unit T further includes a first coupler128selectively engageable with the corresponding second and third couplers30,32of the handle unit N and the handle extension E, respectively. When engaged, the first and second couplers128,30mechanically and electrically couple the tool unit T directly to the handle unit H in a manner similar to that described above with respect to the powered nailer10. Likewise, the first and third couplers128,32engage to mechanically and electrically couple the tool unit T directly to the handle extension E in a manner similar to that described above with respect to the powered nailer10. 
FIGS.15-19illustrate another powered nailer200according to another embodiment of the invention. The powered nailer200includes the handle unit H and the extension E described above with respect to the powered nailer10. The powered nailer200further includes a driver unit N2having a front-mounted motor configuration. The driver unit N2is similar to the driver unit N and includes much of the same structure as the driver unit N. Accordingly, the following description focuses primarily on the structure and features that are different from the embodiment described above in connection withFIGS.1-13. Features and elements that are described in connection withFIGS.1-13are numbered in the200series of reference numerals inFIGS.15-19. It should be understood that the features of the powered nailer200that are not explicitly described below have the same properties as the features of the powered nailer10. The powered nailer200is operable in a first configuration with the handle unit H directly attached to the driver unit N2. Alternatively, the powered nailer200is operable in a second configuration with the handle extension E coupled between the handle unit H and the driver unit N2. The driver unit N2includes a housing212containing therein a motor213(FIG.17) (e.g., a brushless direct current electric motor) that supplies a motive force to operate the powered nailer200. The driver unit N2further includes a first coupler228selectively engageable with the corresponding second and third couplers30,32of the handle unit H and the handle extension E, respectively. When engaged, the first and second couplers228,30mechanically and electrically couple the driver unit N2directly to the handle unit H in a manner similar to that described above with respect to the powered nailer10. Likewise, the first and third couplers228,32engage to mechanically and electrically couple the driver unit N2directly to the handle extension E in a manner similar to that described above with respect to the powered nailer10. In the illustrated embodiment, the powered nailer200is a gas-spring powered nailer200. With reference toFIG.19, the housing212further contains a cylinder216, a drive piston218located within the cylinder216for reciprocal movement therein, and a drive blade220attached to the drive piston218. The powered nailer200also includes a gas spring222(i.e., a fixed quantity of compressed gas, such as nitrogen) within the cylinder216which, during a fastener driving operation, expands within the cylinder216to displace the drive piston218and the drive blade220toward a workpiece or work surface to drive the fastener out a nosepiece224and into the workpiece or work surface. The powered nailer200also includes a lifter mechanism226coupled between the motor213and the drive piston218. The lifter mechanism226returns the drive piston218and drive blade220toward a top-dead-center position within the cylinder216(shown inFIG.4), which compresses the gas spring222for a subsequent fastener driving operation. With reference toFIGS.15-17, the housing212includes a motor housing portion284, located adjacent the nosepiece224, in which the motor213and a transmission286are at least partially positioned. Thus, the motor213and the transmission286are located proximate a front end288of the driver unit N2. The transmission286rotatably couples to a motor output shaft (not shown), and includes a transmission output shaft292extending to the lifter mechanism226to move the drive blade220to the top-dead-center position. 
When the user pulls the trigger38of the handle unit H, the motor213is energized and causes the nail to be fired into the workpiece or work surface. FIG.18is a front view illustrating the driver unit N2with the housing212removed. A width W of the driver unit N2, measured in a lateral or side-to-side direction as shown inFIG.18, is approximately five inches. A height X of the driver unit N2, measured in a vertical or top-to-bottom direction as shown inFIG.18, is somewhat more than five inches (i.e., approximately six inches). The driver unit N2accordingly has a form factor F of approximately five inches by six inches. When obstacles are present near the workpeice (e.g., ducts, pipes, beams, dropped ceiling frames, etc.), the relatively small form factor F of the driver unit N2allows the driver unit N2to fit into spaces measuring approximately five inches by six inches. FIGS.20-24illustrate another powered nailer300according to another embodiment of the invention. The powered nailer300includes the handle unit H and the extension E described above with respect to the powered nailer10. The powered nailer300further includes a driver unit N3having a rear-mounted motor configuration. The driver unit N3is similar to the driver unit N and includes much of the same structure as the driver unit N. Accordingly, the following description focuses primarily on the structure and features that are different from the embodiment described above in connection withFIGS.1-13. Features and elements that are described in connection withFIGS.1-13are numbered in the300series of reference numerals inFIGS.20-24. It should be understood that the features of the powered nailer300that are not explicitly described below have the same properties as the features of the powered nailer10. The powered nailer300is operable in a first configuration with the handle unit H directly attached to the driver unit N3. Alternatively, the powered nailer300is operable in a second configuration with the handle extension E coupled between the handle unit H and the driver unit N3. The driver unit N3includes a housing312containing therein a motor313(e.g., a brushless direct current electric motor) that supplies a motive force to operate the powered nailer300. The driver unit N3further includes a first coupler328selectively engageable with the corresponding second and third couplers30,32of the handle unit H and the handle extension E, respectively. When engaged, the first and second couplers328,30mechanically and electrically couple the driver unit N3directly to the handle unit H in a manner similar to that described above with respect to the powered nailer10. Likewise, the first and third couplers328,32engage to mechanically and electrically couple the driver unit N3directly to the handle extension E in a manner similar to that described above with respect to the powered nailer10. In the illustrated embodiment, the powered nailer300is a gas-spring powered nailer300. With reference toFIG.24, the housing312further contains a cylinder316, a drive piston318located within the cylinder316for reciprocal movement therein, and a drive blade320attached to the drive piston318. The powered nailer300also includes a gas spring322(i.e., a fixed quantity of compressed gas, such as nitrogen) within the cylinder316which, during a fastener driving operation, expands within the cylinder316to displace the drive piston318and the drive blade320toward a workpiece or work surface to drive the fastener out a nosepiece324and into the workpiece or work surface. 
The powered nailer300also includes a lifter mechanism326coupled between the motor313and the drive piston318. The lifter mechanism326returns the drive piston318and drive blade320toward a top-dead-center position within the cylinder316(shown inFIG.24), which compresses the gas spring322for a subsequent fastener driving operation. With reference toFIGS.21and22, the housing312includes a motor housing portion384, located behind the cylinder316and opposite the nosepiece324, in which the motor313and a transmission386are at least partially positioned. Thus, the nosepiece324is located at a front end388of the driver unit N3, whereas the motor313and the transmission386are located proximate a rear end390of the driver unit N3opposite the front end388. The transmission386rotatably couples to a motor output shaft (not shown), and includes a transmission output shaft392extending to the lifter mechanism326to move the drive blade320to the top-dead-center position. When the user pulls the trigger38of the handle unit H, the motor313is energized and causes the nail to be fired into the workpiece or work surface. FIG.23is a front view illustrating the driver unit N3with the housing312removed. A width W of the driver unit N3, measured in a lateral or side-to-side direction as shown inFIG.23, is approximately five inches. A height X of the driver unit N3, measured in a vertical or top-to-bottom direction as shown inFIG.23, is also approximately five inches. The driver unit N3accordingly has a form factor F of approximately five inches by five inches. The form factor F of the driver unit N3is reduced as compared to that of the driver unit N2described above, due to the rear-mounted motor and transmission configuration. When obstacles are present near the workpiece (e.g., ducts, pipes, beams, dropped ceiling frames, etc.), the relatively small form factor F of the driver unit N3allows the driver unit N3to fit into spaces measuring approximately five inches by five inches. FIG.25illustrates the powered nailer10,200,300arranged in the first configuration with the handle unit H directly attached to the driver unit N. N2, N3, and oriented at an address position relative to a ground plane394. In some embodiments, when thus arranged, the gripping portion36of the handle unit H is located below and aligned with the center of gravity (CG)58of the driver unit N, N2, N3. That is, it will be appreciated that a gravity vector G originating from the CG58of the driver unit N, N2, N3will pass through the gripping portion36as shown inFIG.25when the powered nailer10,200,300is thus arranged. FIG.26illustrates a digital readout372that may be positioned on the powered nailer10,200,300according to another embodiment of the invention. The digital readout372is similar to the digital readout72(FIGS.7and8) described above with respect to the powered nailer10. It will be appreciated that the digital readout372is operable with the alignment device described above with respect to the powered nailer10, and indicates to a user when the nailer10,200,300is positioned in an orientation to fire nails perpendicularly to the workpiece. The digital readout372is provided as a screen395capable of displaying a visual indicator396. If an orientation of the powered nailer10,200,300deviates too far from an acceptable range (e.g., more than +/−6 degrees) relative to a work surface, the digital readout72further indicates to the user a direction toward which the powered nailer10should be tilted to restore perpendicularity. 
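As a hedged numerical illustration of the center-of-gravity balance discussed in connection withFIG.6and the address-position alignment noted above, the short Python sketch below sums the gravitational moments of the four component CGs about the origin66. Every mass and lever arm in it is an assumed example value, not data from this description.

# Illustrative moment balance about the origin 66; all masses and lever arms are assumptions.
# Positive lever arms lie on the driver-unit side of the origin, negative ones on the handle side.
G = 9.81  # m/s^2
components = {
    "driver unit":      (2.5, +0.60),  # (mass in kg, lever arm in m), assumed
    "handle extension": (1.0, +0.25),  # assumed
    "handle unit":      (0.8, -0.35),  # assumed
    "battery pack":     (0.7, -0.50),  # assumed
}

net_moment = sum(mass * G * arm for mass, arm in components.values())
handle_side = sum(mass * G * arm for mass, arm in components.values() if arm < 0)

print(f"net moment about the origin: {net_moment:+.1f} N*m")
print(f"offset contributed by the handle-side components: {handle_side:+.1f} N*m")

With these assumed figures the handle-side terms cancel roughly a third of the driver-side moment, which is the effect the description attributes to placing the handle unit and battery pack opposite the driver unit.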
FIGS.27-29illustrate another handle extension E2that can be used in the powered nailers10,200,300, or in the power tool100, in place of the handle extension E. The handle extension E2includes the same features that are described above relative to the handle extension E. and further includes a number of modular pole sections including a handle-end pole section397, a tool-end pole section398attachable to the handle-end pole section397, and one or more intermediate pole sections399attachable intermediate the handle-end and tool-end pole sections397,398. Multiple intermediate pole sections399may be inserted between the handle-end and tool-end pole sections397,398to allow for different lengths of extension of the powered nailers10,200,300, and the power tool100. The pole sections397,398,399are electrically connectable to one another to permit the transmission of electrical power and communication signals between the handle unit H and the driver units N, N2, N3or the tool unit T. With continued reference toFIGS.27-29, in some embodiments, each pole section397,398,399includes at least one of a terminal block391and a terminal block receiver393engageable with other corresponding terminal blocks391. Specifically, in some embodiments, the handle-end pole section397includes a terminal block receiver393, the tool-end pole section398includes a terminal block391, and each intermediate pole section399includes each of a terminal block391and a terminal block receiver393disposed at opposite ends thereof. It will be appreciated that the arrangement may be reversed such that the handle-end pole section397includes a terminal block391and the tool-end pole section398includes a terminal block receiver393. Each terminal block receiver393can receive the terminal block of another pole segment to transmit electrical power to the adjacent pole section, to the handle unit H, or to the driver unit N, N2. N3or the tool unit T. In some embodiments, the pole sections397,398,399can be mechanically and electrically connected to other pole sections, the handle unit H, or the driver unit N, N2, N3or the tool unit T in a single motion. With reference toFIG.30, in some embodiments, each of the powered nailers10,200,300and the power tool100includes an onboard or first on/off switch387provided directly on the driver unit N, N2, N3, or the tool unit T, respectively. In such embodiments, the powered nailers10,200,300and the power tool100also include a remote or second on/off switch389provided on the handle unit H. The powered nailers10,200,300and the power tool100can thus be powered on or powered off locally via the first on/off switch387, or remotely via the second on/off switch389. With reference toFIGS.31and32, in some embodiments of the powered nailer10, the drive blade20(FIG.4) of the driver unit N moves along a drive blade axis21(FIG.31). The gripping portion36of the handle unit H defines a handle axis37that extends orthogonal to the drive blade axis21when the powered nailer10is arranged in the first configuration (i.e., with the handle unit H directly attached to the driver unit N (FIG.1)). To engage the first coupler28with the second coupler30, and thereby connect the handle unit H to the driver unit N, the handle unit H is moved relative to the driver unit N along a mounting direction39indicated by the arrow shown inFIG.31. The mounting direction39is generally parallel to the drive blade axis21. 
With continued reference toFIG.32, in some embodiments of the powered nailer10, the handle extension E generally extends along a pole axis41. When the powered nailer10is arranged in the second configuration (i.e., with the handle extension E coupled between the handle unit H and the driver unit N (FIG.2)), the handle axis37of the handle unit H extends parallel to the pole axis41and to the drive blade axis21. To engage the first coupler28with the fourth coupler56, and thereby connect the handle unit H to the handle extension E, the handle unit H is moved relative to the handle extension E along the mounting direction39orthogonal to the drive blade axis21(FIG.31). It will be appreciated that the above description relating to the mounting direction39and drive blade axis21applies with equal weight to the powered nailers200,300described above. In some embodiments, the powered nailers10,200,300deliver at least 230 joules of kinetic energy while the gas contained within the spring (e.g., gas spring22) has a pressure of 166 pounds per square inch (psi) or less. In other embodiments, the powered nailers10,200,300deliver at least 200 joules of kinetic energy while the gas contained within the spring (e.g., gas spring22) has a pressure of 175 psi or less. With reference toFIGS.33-35, it will be appreciated that the motor, transmission, battery pack, and handle can be located in different locations relative to the drive blade and the gas spring. Specifically,FIG.33illustrates a prior art powered nailer400selectively attachable to a pole extension P to perform overhead fastening operations. The powered nailer400includes a motor413, a transmission486, a gas spring422, a lifter mechanism426, a battery receptacle440for receiving a battery pack442, and a gripping portion or handle436all locally contained within the powered nailer400.FIG.34illustrates a powered nailer500for performing overhead fastening operations according to another embodiment. The powered nailer500includes a nailer portion515and a handle portion517separated from the nailer portion515by an extension portion519. In the illustrated embodiment, the extension portion519is a pole section519. A motor513and a transmission586are provided adjacent the handle portion517, and the handle portion517includes a battery receptacle540for attaching a battery pack542and a gripping portion536. A gas spring522and lifter mechanism526are provided at the nailer portion515. Power is transmitted from the handle portion517to the nailer portion515via a driveshaft523that extends through the pole section519.FIG.35illustrates the arrangement of components found in the powered nailer300described above. Specifically, the motor313, transmission386, gas spring322, and lifter mechanism326are all contained within the driver unit N3. Meanwhile, the gripping portion336and the battery receptacle340are provided with the handle unit H. The handle extension E connects the handle unit H to the driver unit N3for performing overhead fastening operations. Another challenge is providing a comfortable means for inserting individual nails into the powered nailer10,200,300.FIGS.36A-36Fillustrate various barrel designs which may be provided with the powered nailers10,200, or300. In some embodiments, the powered nailer10,200,300includes a top slot674a(FIG.36A) provided in a top of a barrel of the nosepiece to receive the nail. In other embodiments, the powered nailer10,200,300includes a side slot674b(FIG.36B) provided in a side of a barrel of the nosepiece to receive the nail. 
In other embodiments, the powered nailer10,200,300includes a side hinge barrel segment676a(FIG.36C) that opens to receive a nail. In other embodiments, the powered nailer10,200,300includes a torsion sleeve685(FIG.36D) that rotates to reveal an opening for receiving the nail. In other embodiments, the powered nailer10,200,300includes a break-barrel676b(FIG.36E) that rotates about a bottom hinge to reveal an opening for receiving the nail. In other embodiments, the powered nailer10,200,300includes a top hinge676c(FIG.36F) about which a door of the barrel rotates to reveal an opening for receiving the nail. In other embodiments, the powered nailer10,200,300includes a side arm676d(FIG.36G) for receiving the nail. In other embodiments, the powered nailer10,200,300includes a harmonica revolver682(FIG.36H) that rotates to reveal an opening for receiving the nail. FIGS.37-38illustrate another powered nailer700according to another embodiment of the invention. The powered nailer700includes a driver unit N4and a handle extension E3selectively and removably attachable to the driver unit N4. Unlike the driver units N, N2, and N3described above, the driver unit N4includes a handle portion717integrally formed with a housing712. That is, the handle portion717is not detachable from the driver unit N4. A trigger738is coupled to the handle portion717and actuable to selectively activate the powered nailer700to drive fasteners into a workpiece. The housing712also defines a first coupler728that is selectively engageable with a removable and rechargeable battery pack742. Additionally, the first coupler728is also selectively engageable with a corresponding second coupler732located on the handle extension E3. When engaged, the first and second couplers728,732mechanically and electrically couple the driver unit N4directly to the handle extension E3. Accordingly, the first coupler728is configured to selectively engage the battery pack742, and the first coupler728is further configured to selectively engage the second coupler732of the handle extension E3in lieu of the battery pack742. The first coupler728is supported at a first end744of the handle extension E3. The handle extension E3also includes a battery receptacle740(for receiving the battery pack742) located at a second end746of the handle extension E3, opposite the first end744. An elongated shaft segment748extends along a shaft axis750between the first and second ends744,746. The shaft segment748includes a gripping portion736for grasping by a user, located adjacent the battery receptacle740. The handle extension E3also includes a remote actuation mechanism701for initiating a fastener driving operation of the powered nailer700when the handle extension E3is coupled to the driver unit N4. The actuation mechanism701includes a remote trigger703coupled to the handle extension E3adjacent the gripping portion736, a lever705coupled to the second coupler732adjacent the trigger738, and a linkage assembly707that operatively couples the remote trigger703to the lever705. To operate the powered nailer700with the handle extension E3, the user pulls the remote trigger703, causing the linkage assembly707and the lever705to communicate motion of the remote trigger703to the trigger738. Actuation of the trigger738causes a fastener to be fired into the workpiece or work surface. The powered nailer700is operable in a first configuration with the driver unit N4directly attached to the battery pack742(FIG.37). 
Alternatively, the powered fastener driver700is operable in a second configuration with the handle extension E3coupled between the battery pack742and the driver unit N4(FIG.38). In the second configuration, the user grips the gripping portion736and actuates the remote trigger703to initiate a fastener driving operation. Movement of the remote trigger703is communicated to the lever705via the linkage assembly707, so that the lever705presses the trigger738, causing electrical power supplied from the battery pack742to energize the driver unit N4. Various features of the invention are set forth in the following claims. | 35,468 |
11858111 | DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Please refer toFIGS.1to11for a preferable embodiment of the present invention. A foldable torque tool of the present invention includes a handle1, a driving member2and a blocking structure3. The handle1includes an inner space13, a first opening14and a second opening15which are respectively communicated with the inner space13. An opening direction of the first opening14is lateral to an opening direction of the second opening15, and the first opening14is laterally communicated with the second opening15. The driving member2is configured to drive an object to rotate, and the driving member2is idled when a torque greater than a predetermined torque value is exerted thereon. The driving member2is disposed within the inner space13and is rotatable about an axis of rotation25between a folded position and an unfolded position. An axis24of the driving member2extends in a direction toward the first opening14when the driving member2is in the folded position. In this embodiment, the foldable torque tool which is folded is rod-shaped (as shown inFIG.7), which provides a smaller volume and is portable and easy to storage, and the foldable torque tool can be used as a screwdriver. The driving member2partially protrudes beyond the handle1through the second opening15when the driving member2is in the unfolded position. In this embodiment, the foldable torque tool which is unfolded is T-shaped (as shown inFIG.5) so that the handle1provides a large contact area to be held comfortably and a longer moment arm to generate a large torque. A direction that the driving member2is rotated from the unfolded position to the folded position is defined as a first switch direction41. The blocking structure3includes a first blocking unit31and a second blocking unit32. The first blocking unit31is disposed on an inner wall of the handle1, and the second blocking unit32is disposed on the driving member2. When the driving member2is in the unfolded position, the first blocking unit31is located at a side of the axis of rotation25, and the first blocking unit31is blocked with the second blocking unit32in the first switch direction41so that the driving member2is non-rotatable to the folded position along the first switch direction41. Therefore, the driving member2is stably positioned in the unfolded position, which allows stable operation. In this embodiment, the handle1is made of materials including at least one of plastic and rubber, which provides comfortable grip and anti-slip effect and can be integrally formed with good structural strength. The driving member2is a torque socket, which is commonly used. Preferably, when the driving member2is in the folded position, the driving member2partially protrudes beyond the handle1through the first opening14so that the driving member2can be rotated to the unfolded position by pushing the protruding portion of the driving member2. A direction that the driving member2is rotated from the folded position to the unfolded position is defined as a second switch direction42. The blocking structure3further includes a third blocking unit33and a fourth blocking unit34. The third blocking unit33is disposed on the inner wall of the handle1, and the fourth blocking unit34is disposed on the driving member2. 
When the driving member2is in the folded position, the third blocking unit33is blocked with the fourth blocking unit34in the second switch direction42so that the driving member2is non-rotatable to the unfolded position along the second switch direction42. Therefore, the driving member2is stably positioned in the folded position. Specifically, the first blocking unit31includes two first elongated blocks311disposed on two opposite sides of the second opening15so as to provide symmetrical support; and the two first elongated blocks311extend linearly from the inner space13toward the second opening15so as to stably block the second blocking unit32at different depths of the inner space13. Similarly, the third blocking unit33includes two second elongated blocks331, and the two second elongated blocks331extend toward the first opening14. In this embodiment, the second blocking unit32and the fourth blocking unit34are respectively a portion of an outer surface of the driving member2. Moreover, the inner wall of the handle1includes two protruding portions16located at two opposite sides of the second opening15, and the two protruding portions16extend toward each other and are located on a rotational path of the driving member2. The driving member2is cylindrical, and a distance between the two protruding portions16is smaller than or equal to a diametrical dimension of the driving member2so that the two protruding portions16are blockable with the driving member2in the first switch direction41or in the second switch direction42. Preferably, as viewed in the opening direction of the second opening15, the two second elongated blocks331protrude beyond the two protruding portions16so as to be biased against and effectively restrict the driving member2in the folded position. Similarly, a distance between the two first elongated blocks311is smaller than or equal to the diametrical dimension of the driving member2so that the two first elongated blocks311are biased against and restrict the driving member2therebetween. In this embodiment, the two first elongated blocks311extend to an edge defining the second opening15so as to stably block the second blocking unit32at different depths of the inner space13. When the driving member2is in the folded position, the driving member2is abutted against an edge defining the first opening14in the first switch direction41, which provides a stable positioning effect in coordination with the first blocking unit31. Similarly, when the driving member2is in the unfolded position, the driving member2is abutted against the edge defining the second opening15in the second switch direction42. The driving member2further includes an adjusting member21, a main body22and a torque adjusting rod23. The adjusting member21is pivotally connected with the handle1, and the torque adjusting rod23is rotatable relative to the main body22and disposed within an interior of the main body22. The torque adjusting rod23protrudes beyond an end surface221of the main body22along the axis24, and the adjusting member21is co-rotatably sleeved to the torque adjusting rod23so that the predetermined torque value is adjustable by rotation of the adjusting member21. Preferably, the adjusting member21, the main body22and the torque adjusting rod23are made of metal, which provides good structural strength and is durable. The adjusting member21has an embedding groove211, and an opening of the embedding groove211faces the end surface221.
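Since the description above characterizes the driving member as idling once the applied torque exceeds the predetermined value, and that value as adjustable by rotating the adjusting member21, the following minimal Python sketch models that behavior. The two example settings and the slip model are assumptions for illustration only; the description gives no torque figures or control logic.

# Behavioral sketch of a torque-limited driving member (all torque values are illustrative assumptions).
LOW_SETTING_NM = 3.0   # assumed lower predetermined torque value
HIGH_SETTING_NM = 6.0  # assumed higher predetermined torque value after re-adjusting member 21

def drive(applied_nm, setting_nm):
    """Return (torque transmitted to the object, idling flag); above the setting the member idles."""
    if applied_nm > setting_nm:
        return setting_nm, True   # the driving member slips, so the object sees no more than the setting
    return applied_nm, False

for applied in (2.0, 4.0, 7.0):
    print(f"applied {applied:.1f} Nm -> low setting {drive(applied, LOW_SETTING_NM)}, "
          f"high setting {drive(applied, HIGH_SETTING_NM)}")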
The end surface221has a fitting portion222disposed thereon, and the adjusting member21is axially movable relative to the torque adjusting rod23between a locked position and an unlocked position. When the adjusting member21is in the locked position, the fitting portion222is embedded within the embedding groove211, and the adjusting member21is non-rotatable relative to the main body22. When the adjusting member21is in the unlocked position, the fitting portion222is separated from the embedding groove211, and the adjusting member21is rotatable relative to the main body22. The end surface221has a first post223disposed thereon, and the adjusting member21has a second post212disposed thereon. When the adjusting member21is sleeved to the torque adjusting rod23, the first post223and the second post212interfere with each other in a circumferential direction around the axis24, which determines a rotation direction and a maximum angle of rotation of the adjusting member21. For example, when the second post212is radially abutted against one side of the first post223, the predetermined torque value is a first torque value; and when the adjusting member21is rotated clockwise and the second post212is radially abutted against another side of the first post223, the predetermined torque value is changed to a second torque value. Preferably, two opposite sides of the adjusting member21have two cutting notches213recessed thereon, and two pivots18of the handle1are partially located within the inner space13and protrude into the two cutting notches213. Therefore, a weight of the driving member2is reduced, and the adjusting member21and the two first elongated blocks311have sufficient space therebetween, which provides smooth rotation, accurate positioning and stable restriction of the driving member2. Furthermore, the handle1further includes a first member11and a second member12, and the first member11has the inner space13, the first opening14and the second opening15disposed thereon. The second member12is openably disposed on the first member11, the first member11and the second member12define a receiving space17therebetween, and the receiving space17is configured to receive at least one driving head5, which is convenient to carry. Although particular embodiments of the invention have been described in detail for purposes of illustration, various modifications and enhancements may be made without departing from the spirit and scope of the invention. Accordingly, the invention is not to be limited except as by the appended claims. | 9,339
11858112 | DETAILED DESCRIPTION OF THE INVENTION FIGS.1through5show a tool10including an impacting structure in accordance with a first embodiment of the present invention. The tool10has a tool head20which has a connecting portion21for receiving a pivot and a face22which the impacting structure selectively hits. The tool10further includes a shaft structure30connected to the tool head20. The shaft structure30has a connecting portion31which links with the connecting portion21of the tool head20when the shaft structure30and the tool head20are connected with each other. The shaft structure30has a body portion32. The body portion32is of hexagonal cross section. The tool10further includes a shaft structure40which can selectively hit the tool head20, i.e. a hitting portion41at an end of the shaft structure40can selectively impact the face22. The shaft structure40is movable between a first position in which the hitting portion41contacts with the face22and a second position in which the hitting portion41is separated from the face22. The shaft structure40has a rod44. The shaft structure40is coupled to the shaft structure30. The shaft structure30is movably attached to a body portion42of the shaft structure40. The shaft structure30is movable with respect to the shaft structure40such that the tool head20is moved toward and away from the shaft structure40. The body portion42is attached to the rod44and is in the form of a sleeve43. The sleeve43has a hole431receiving the shaft structure30, i.e. the body portion32. The shaft structure30is movable axially along an axis L1with respect to the shaft structure40. The body portion42is adapted to stop the shaft structure30. The body portion42abuts at least one stopping portion33of the shaft structure30, which protrudes radially outwardly from the body portion32, to stop the shaft structure30. The at least one stopping portion33includes two stopping portions33and34on opposite ends of the shaft structure30. Each of the stopping portions33and34is in the form of a protrusion and disposed outside the hole431. The stopping portions33and34are located on opposite sides of the body portion42. The stopping portion34prevents the shaft structure30from detaching from the body portion42. FIGS.6through8show a tool including an impacting structure in accordance with a second embodiment of the present invention, and the same numbers are used to correlate similar components of the first embodiment, but bearing a letter a. The tool includes a shaft structure40awhich can selectively hit the tool head, i.e. a hitting portion41aat an end of the shaft structure40acan selectively impact the face. The shaft structure40ahas a rod44a. The second embodiment differs from the first embodiment in that the shaft structure40aincludes a sleeve43ahaving a hole431areceiving a shaft structure30aand a hole432areceiving the shaft structure40a, respectively. The shaft structure40ais axially movable along an axis L2. The axis L2is parallel to the axis L1. Further, the hole432ahas at least one retaining portion433ain the form of a protrusion and the sleeve43aincludes at least one through hole434aextending radially therethrough from the hole432a. The shaft structure40ahas at least one retaining portion441ain the form of a recess. The at least one retaining portion433aand the at least one retaining portion441aselectively engage with and disengage from each other. The at least one retaining portion433a, the at least one retaining portion441a, and the at least one through hole434aare aligned.
The at least one through hole434aenables the depressibility of the at least one retaining portion433a, thereby facilitating engagement and disengagement of the at least one retaining portion433aand the at least one retaining portion441a. FIGS.9and10show a tool including an impacting structure in accordance with a third embodiment of the present invention, and the same numbers are used to correlate similar components of the first embodiment, but bearing a letter b. The third embodiment differs from the first embodiment in that a shaft structure30bincludes a portion of a body portion32bforming a handle35b. The body portion32bextends along the axis L1, and the handle35band a shaft structure40bare on opposite sides of the axis L1. The handle35band the shaft structure40btherefore delimit a space into which a user's fingers can be inserted. In the embodiment, the handle35bincludes two bending structures36band37bwhich do not extend along the axis L1. The bending structure36bhas an end adjacent to a connecting portion31b. The bending structure37bis unable to move through a sleeve43bof the shaft structure40b. Therefore, the handle35bis unable to move through the sleeve43b. In view of the foregoing, the shaft structures30and40are movably coupled to each other. Thus, when the tool head20, which is used to pry a work piece, suffers a problem of moving with respect to the work piece, the shaft structure40,40a, and40bcan be moved to counteract friction between the tool head20and the work piece, thereby moving the tool head20. The foregoing is merely illustrative of the principles of this invention and various modifications can be made by those skilled in the art without departing from the scope and spirit of the invention. | 5,280
11858113 | Before any embodiments of the invention are explained in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of components set forth in the following description or illustrated in the following drawings. The invention is capable of other embodiments and of being practiced or of being carried out in various ways. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. DETAILED DESCRIPTION FIGS.1-4illustrate a drill stand10including a base14for mounting on a mounting surface16that can be vertical (e.g. a wall) or horizontal (e.g. the ground, as shown inFIG.1). The drill stand10also includes a mast18for supporting a carriage22, and a support bracket26moveably coupled to the mast18and the base14. The mast18defines a longitudinal axis30and is pivotably coupled to the base14to pivot about a pivot joint34. The carriage22is moveably coupled to the mast18and is configured to carry a core drill36, as described in further detail below. The support bracket26is moveably coupled to the mast18via a tool-free clamping mechanism38that selectively locks the support bracket26to the mast18. As shown inFIG.3, the clamping mechanism38includes a pair of clamping arms42positioned on respective rails46on the mast18. The rails46are parallel to the longitudinal axis30of the mast18. A handle50can be rotated to tighten the clamping arms42into the rails46. Specifically, a bolt51is coupled for rotation with the handle50. The bolt51extends through and is rotatable relative to both clamping arms42and a pair of brackets53that support the clamping arms42. Thus, when the handle50is rotated in a tightening direction with respect to the arms42(and brackets53), the bolt51rotates and forces the handle-side bracket53to move towards the non-handle-side bracket53, thus forcing the clamping arms42into the rails46, causing the support bracket26to be locked with respect to the mast18. Alternatively, the handle50can be rotated in an opposite, loosening direction, which causes the bolt51to rotate and allow the handle-side bracket53to move away from the non-handle-side bracket53. In response, the clamping arms42naturally deflect outward away from the rails46, thereby allowing the support bracket26to move along the mast18via the arms42sliding within the rails46. When the support bracket26is locked with respect to the mast18, an operator may grasp the support bracket26to carry the drill stand10. The base14also includes a pair of handles52on opposite sides of the base14that permit the operator to carry the drill stand10. When the clamping arms42are loosened with respect to the rails46, the mast18and support bracket26are collapsible relative to the base14, as shown inFIGS.5A and5B. In the embodiment shown inFIG.5A, as the mast18pivots about the pivot joint34toward the support bracket26, the support bracket26also pivots about a pivot joint54while the clamping arms42slide along the rails46of the mast18away from the pivot joint34. An angle α is defined between the mast18in its collapsed position and the mast18in its original position shown inFIG.4and shown in phantom inFIG.5A. In the collapsed position, an angle β is defined between the mast18and the mounting surface16. An angle θ is defined between the support bracket26in its collapsed position and the support bracket26in its original position shown inFIG.4and shown in phantom inFIG.5A.
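As an illustrative note on the clamping travel described earlier in this passage (the pitch p and rotation angle θ below are hypothetical quantities introduced only for explanation and are not specified in the source), if the bolt51is treated as a simple single-start screw of pitch p, the axial travel Δx of the handle-side bracket53for a handle rotation of θ radians is approximately

\[ \Delta x \approx p \cdot \frac{\theta}{2\pi}, \]

so each full turn of the handle50advances or retracts the bracket by roughly one thread pitch, tightening or loosening the clamping arms42accordingly.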
In the embodiment illustrated inFIG.5A, α is 83 degrees, β is 7 degrees, and θ is 109 degrees. However, in other embodiments, α, β, and θ can be other values, with α and β always totaling 90 degrees. In some embodiments, once collapsed, the total vertical height55of the stand10, measured from the mounting surface16to a plane57parallel to the mounting surface16and intersecting a vertically topmost point59of the stand10while collapsed, is approximately 13.5 inches. In other embodiments, the height55is less than 13.5 inches. Alternatively, in another embodiment shown inFIG.5B, the mast18and support bracket26are movable to an alternative collapsed configuration in which at least one of the mast18or the support bracket26is substantially parallel with the base14. In the embodiment shown inFIG.5B, as the mast18pivots about pivot joint34away from support bracket26, the support bracket26also pivots about the pivot joint54while the clamping arms42slide along the rails46of the mast18toward the pivot joint34. In the embodiment shown inFIG.5B, the mast18is substantially parallel with the base14. Because the support bracket26includes one or more bolts56extending therethrough (FIG.4), the base14includes one or more recesses (not shown) to accommodate the one or more bolts56when the support bracket26has been moved to the collapsed configuration. As shown inFIGS.1-4, the base14includes a wear plate58having an elongated slot62through which a mounting bolt (not shown) may be inserted to secure the base14to the mounting surface16by, for example, setting the mounting bolt through the wear plate58and into a bore created in the mounting surface16. The base14also includes a plurality of eyelet screws66that may be threadably adjusted with respect to the base14in order to vertically adjust respective feet70attached to the screws66with respect to the base14. In the illustrated embodiment there are four screws66at four corners of the base14, but in other embodiments there may be more or fewer screws66, and the screws may be in different locations on the base14. The operator may adjust the height and orientation of the base14with respect to the mounting surface16by adjusting one or more of the screws32with respect to the base14. As shown inFIG.3, the mast18also includes a bubble level74and the base14includes a bullseye level76. Thus, if an operator mounts the base14to a vertical mounting surface16, the bubble level74can help an operator level the mast18and ensure it is parallel to the ground surface. Similarly, if an operator mounts the base14to a horizontal mounting surface16, the bullseye level76can help an operator level the base14and ensure it is parallel to the ground surface. As shown inFIG.6, a bottom side82of the base14includes a first, outer gasket86and a second, inner gasket90. The inner gasket90is arranged inside the outer gasket86and outside the slot62, such that a vacuum chamber94is defined between the first gasket86, the second gasket90, the bottom side82of the base14, and the mounting surface16when the base14is on the mounting surface16. The base14includes a quick release valve98(FIGS.1-6) and a vacuum port102(FIG.1). The vacuum port102extends from a top side106of the base14to the vacuum chamber94. The quick release valve98extends from a side wall108of the base14to the vacuum chamber94. 
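A brief illustrative note on the vacuum chamber94just described (the symbols below are not used in the source): the force with which a partial vacuum in the chamber can hold the base14against the mounting surface16is governed by the pressure difference across the chamber and the area enclosed between the outer and inner gaskets86,90,

\[ F_{\text{hold}} = \left(P_{\text{atm}} - P_{\text{chamber}}\right) A_{\text{chamber}}, \]

so a larger gasket-bounded area or a deeper vacuum gives a stronger hold on the mounting surface16.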
Thus, when the first and second gaskets86,90are engaged against the mounting surface16, the operator may attach a vacuum source to the vacuum port102on the top side106of the base14, and operate the vacuum source to create a vacuum in the vacuum chamber94. In this manner, the vacuum in the vacuum chamber94secures the base14to the mounting surface16. When the operator desires to remove the base14from the mounting surface16, the operator can actuate the quick release valve98, causing ambient air at atmospheric pressure to enter the vacuum chamber94, which breaks the vacuum and allows the operator to remove the base14. As shown inFIGS.1-4, the carriage22includes an annular collar110for securing the core drill36. The collar110includes a fixed end111and a moveable end112. A gap113is defined between the two ends111,112. A handle114is arranged on and rotatable with respect to the fixed end111. A fastener115(FIG.6) is coupled for rotation with the handle114and extends through and is rotatable with respect to the ends111,112. When the handle114is rotated in a tightening direction, the fastener115rotates in a direction towards the carriage22, forcing the moveable end112closer to the fixed end111, reducing the gap113and thereby securing the core drill36. When the handle114is rotated in an opposite, loosening direction, the fastener115rotates away from the carriage22, allowing the moveable end112to deflect away from the fixed end111, increasing the size of the gap113and thereby allowing the core drill36to be removed from the collar110. The collar110can thus be alternately tightened and loosened by rotating the handle114, allowing an operator to selectively secure (FIG.1) and remove (FIGS.2-5) the core drill36from the collar110. As shown inFIGS.7and8, the carriage22includes a spindle assembly118for moveably adjusting the carriage22along the mast18, and a handle assembly122for driving the spindle assembly118. The spindle assembly118includes a pinion126that is drivingly engaged with a rack130included on the mast18. As described in further detail below, the handle assembly122is removably coupled to either end of the spindle assembly118without the use of tools. The handle assembly122can be interchangeably coupled to the spindle assembly118on either a first side134of the carriage22or an opposite second side138of the carriage22. In this manner, an operator may selectively attach the handle assembly122to either one of the sides134,138, depending on user preference or work environment constraints. With reference toFIGS.1-3,7and8, the handle assembly122is shown positioned on the second side138of the carriage22. However, as shown inFIGS.7and8, a second instance of the handle assembly122is shown in phantom on the first side134of the carriage22to illustrate its alternative position. With continued reference toFIGS.7and8, the spindle assembly118includes a first spindle142proximate to and accessible from the first side134of the carriage22and a second spindle146proximate to and accessible from the second side138of the carriage22. The spindle assembly118includes a first bushing150positioned around the first spindle142and a second bushing154positioned around the second spindle146. The bushings150,154rotatably support the spindles142,146, respectively, and are interference fit to the carriage22, preventing the bushings150,154themselves from rotating. The second spindle146includes a threaded shank158received within a threaded bore162in the first spindle142to thereby unitize the spindles142,146for co-rotation.
Alternatively, the spindles142,146may be non-rotatably and axially coupled in different manners. The second spindle146also includes a cylindrical portion166upon which the pinion126is press fit. Thus, the pinion126co-rotates with the second spindle146in response to a torque input to either of the spindles142,146via the handle assembly122, causing the carriage22to move up and down the mast18. The spindle assembly118also includes a brake mechanism168that prevents the spindles142,146and pinion126from rotating when the operator is not holding the handle assembly122. A plurality of washers170are positioned around the cylindrical portion166between the first spindle142and the pinion126. In the embodiment illustrated inFIGS.7and8, the brake mechanism168comprises one or more Belleville washers amongst the plurality of washers170and a bushing172fixed within the carriage22. The one or more Belleville washers exert a predetermined axial preload force on the pinion126in a direction away from the bushing172, such that the second spindle146is likewise biased in the same direction. Because the first spindle142is threadably coupled to the second spindle146via the threaded shank158within the threaded bore162, the first spindle142is pulled by the second spindle146against the bushing172, creating friction therebetween. The friction between the first spindle142and the bushing172is sufficiently high to prevent the spindles142,146from rotating due to the weight of the carriage22or core drill36pulling down on the carriage22and pinion126when the operator is not holding the handle assembly122or carriage22. However, the friction between the first spindle142and the bushing172is sufficiently low that when an operator applies torque to the spindles142,146via the handle assembly122, the first spindle142is able to rotate relative to the bushing172, along with the second spindle146and pinion126. Thus, the brake mechanism168prevents the carriage22from moving downward along the mast18due to the force of gravity absent the operator applying a force via the handle assembly122, but permits the operator to move the carriage22along the mast18by rotating the handle assembly122, as described in further detail below. The first spindle142defines a first non-cylindrical drive socket174(FIG.8) accessible from the first side134of the carriage22and the second spindle146defines a second non-cylindrical drive socket178(FIGS.7and8) accessible from the second side138of the carriage22. The drive sockets174,178are each operable to receive a corresponding-shaped drive member182of the handle assembly122. In the illustrated embodiment of the drill stand10, the drive sockets174,178and the drive member182each have a corresponding square cross-sectional shape. Alternatively, the drive sockets174,178and the drive member182may be configured with different corresponding non-cylindrical shapes. The handle assembly122also includes a handle hub186from which the drive member182extends and two levers190extending from opposite sides of the handle hub186. The handle assembly122further includes a quick-release mechanism194for selectively locking the handle assembly122to the spindle assembly118.
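The operating window of the brake mechanism168described above can be summarized by a simple torque inequality (an interpretive sketch, not language from the source; the symbols are illustrative and neglect the assisting or opposing contribution of the load when the operator drives the carriage):

\[ T_{\text{load}} < T_{\text{friction}} < T_{\text{operator}}, \]

where T_load is the torque on the pinion126produced by the weight of the carriage22and core drill36, T_friction is the friction torque between the first spindle142and the bushing172set by the Belleville preload, and T_operator is the torque an operator can comfortably apply through the handle assembly122. The left inequality keeps the carriage from drifting down under gravity; the right inequality lets the operator overcome the brake and reposition the carriage.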
In the illustrated embodiment, the quick-release mechanism194includes a ball detent198in one of the faces of the drive member182and a plunger202coaxial with the hub186and drive member182for biasing the ball detent198toward a position in which at least a portion of the ball detent198protrudes from the face of the drive member182in which it is located (i.e., an extended position). In the illustrated embodiment, the plunger202is coupled for axial movement with a release actuator204arranged on the handle hub186. The release actuator204defines a slot206through which an extension208coupling the two levers190extends. The release actuator204is biased away from the drive member182by a spring210, which is set between the drive member182and the release actuator204. The slot206is long enough to permit the release actuator204to move within the handle hub186between an outwardly-biased position and an inwardly-depressed position, against the force of the spring210. As shown inFIGS.7and8, the plunger202includes a notch212. When the release actuator204, and therefore the plunger202, are depressed inwardly by an operator, the plunger202moves towards the ball detent198, thus allowing the ball detent198to be received into the notch212. When the release actuator204and the plunger202are allowed to return to their outwardly-biased positions by the spring210, a ramp surface214on the plunger202adjacent the notch212displaces the ball detent198radially outward, causing a portion of the ball detent198to protrude from the drive member182and engage a corresponding detent recess218in the drive sockets174,178(FIGS.7and8), thereby axially retaining the handle assembly122to the spindle assembly118. In operation, an operator depresses and holds the release actuator204and, while holding the release actuator204, the operator couples the handle assembly122to the spindle assembly118by inserting the square drive member182into either the first drive socket174or the second drive socket178. Once inserted, the operator releases the release actuator204, which causes the ramp surface214to force the ball detent198into the detent recess218of the first drive socket174or second drive socket178, thereby axially retaining the handle assembly122to the spindle assembly118. The operator then rotates the handle assembly122to reposition the carriage22with respect to the mast18. To remove the handle assembly122for storage or for repositioning to the other side of the carriage22, the operator depresses the release actuator204against the bias of the spring210, moving the plunger202into a position in which the ball detent198is received into the notch212. With the ball detent198in the notch212, no portion of the ball detent198protrudes from the drive member182for engaging the detent recesses218, thereby permitting removal of the handle assembly122from either the first drive socket174or the second drive socket178. To reattach the handle assembly122to either side of the spindle assembly118, the operator needs only to push the drive member182into one of the drive sockets174,178. Interference between the ball detent198and the drive sockets174,178displaces the ball detent198inward. A component of the ball detent198displacement is redirected axially by the ramp surface214, against the bias of the spring210, causing the plunger202to automatically retract into the hub186during insertion of the drive member182into one of the drive sockets174,178.
Upon receipt of the ball detent198into one of the detent recesses218in the drive sockets174,178, the handle assembly122is again locked to the spindle assembly118. With reference toFIGS.1-4, the mast18defines grooves216that are parallel to the longitudinal axis30and arranged on opposite sides of the mast18. In the illustrated embodiment the mast18includes two grooves216but in other embodiments the mast18may include more or fewer grooves216. The carriage22includes rollers220arranged in the grooves216. In the illustrated embodiment, the carriage22includes four rollers220, but in other embodiments, the carriage22may include more or fewer rollers220. In response to the carriage22moving relative to the mast18in a direction parallel to the longitudinal axis30, as described above, the rollers220roll along the grooves216, thus facilitating smooth translation of the carriage22along the mast18. Because the grooves216extend all the way to a top236of the mast18, the carriage22is removable from the mast18in a direction parallel to the longitudinal axis30in a tool-free manner. Specifically, an operator may simply slide the carriage22off the top of the mast18(FIG.4), because nothing at the top236of the mast18blocks or otherwise prevents the rollers220from rolling off the grooves216or the pinion126from disengaging the rack130. The capability to remove the carriage22from the mast18in a tool-free manner simplifies disassembly and removal of the drill stand10from the work site. In other embodiments, the carriage22may be removable from the mast18in a direction transverse to the longitudinal axis30. As shown inFIGS.1-4, the drill stand10includes a battery mount240that selectively receives a battery244(FIG.1) that can power the core drill36. In the illustrated embodiment, the battery mount240is attached to the mast18within a space bounded by the mast18, the support bracket26, and the base14. Therefore, the battery244is positioned within this same space when not in use. In other embodiments (not shown), the battery mount240may be arranged on the support bracket26but within the same space bounded by the mast18, the support bracket26, and the base14to provide protection for the battery244. As shown inFIGS.1-4, the battery mount240is a bracket with a C-shaped cross section that receives a mating portion of the battery244. Because the drill stand10includes the battery mount240for the battery244, the operator can always have a spare (charged) battery244ready for the core drill36in case the battery244on the core drill36requires recharging. Various features of the invention are set forth in the following claims. | 19,620
11858114 | DETAILED DESCRIPTION Reference now will be made in detail to embodiments of the present invention, one or more examples of which are illustrated in the drawings. The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any implementation described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other implementations. Moreover, each example is provided by way of explanation, rather than limitation of, the technology. In fact, it will be apparent to those skilled in the art that modifications and variations can be made in the present technology without departing from the scope or spirit of the claimed technology. For instance, features illustrated or described as part of one embodiment can be used with another embodiment to yield a still further embodiment. Thus, it is intended that the present disclosure covers such modifications and variations as come within the scope of the appended claims and their equivalents. The detailed description uses numerical and letter designations to refer to features in the drawings. Like or similar designations in the drawings and description have been used to refer to like or similar parts of the invention. As used herein, the terms “first”, “second”, and “third” may be used interchangeably to distinguish one component from another and are not intended to signify location or importance of the individual components. The singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. The terms “coupled,” “fixed,” “attached to,” and the like refer to both direct coupling, fixing, or attaching, as well as indirect coupling, fixing, or attaching through one or more intermediate components or features, unless otherwise specified herein. As used herein, the terms “comprises,” “comprising,” “incudes,” “including,” “has,” “having” or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, article, or apparatus that comprises a list of features is not necessarily limited only to those features but may include other features not expressly listed or inherent to such process, method, article, or apparatus. Further, unless expressly stated to the contrary, “or” refers to an inclusive- or and not to an exclusive- or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present). Terms of approximation, such as “about,” “generally,” “approximately,” or “substantially,” include values within ten percent greater or less than the stated value. When used in the context of an angle or direction, such terms include within ten degrees greater or less than the stated angle or direction. For example, “generally vertical” includes directions within ten degrees of vertical in any direction, e.g., clockwise or counter-clockwise. Benefits, other advantages, and solutions to problems are described below with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any feature(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature of any or all the claims. In general, the present invention is directed to a carrying case for a power tool. 
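Restating the tolerance terms defined above in symbolic form (no new content is added here): for a stated value x, a term of approximation covers the closed interval

\[ [\,0.9\,x,\ 1.1\,x\,], \]

and for a stated angle or direction it covers any value within 10 degrees on either side of the stated one.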
The carrying case can include a plurality of distinct compartments within an interior volume of the carrying case. Each distinct compartments can be configured to receive an accessory for the power tool. For instance, the distinct compartments can be configured to receive a battery-powered power tool, a battery configured to power the battery-powered power tool, and a battery charger for the battery, respectively. The carrying case can include a battery receiver configured for locking engagement with the battery. The present inventors have found that the carrying case of the present invention can provide secure and safe storage and transportation of a battery-powered power tool and its associated accessories including the battery and battery charger for the power tool. Referring now to the drawings,FIG.1illustrates a perspective view of the carrying case100of the present invention. The carrying case100includes a top102, a bottom104generally opposite the top102, a front106, a back108generally opposite the front106, a first side110, and a second side112generally opposite the first side110. An interior volume114of the carrying case100is defined within the top102, bottom104, front106, back108, first side110and second side112. The carrying case100may include one or more handles, e.g., a handle120on the top102and/or one or more handles200on the first side110and second side112. The top102of the carrying case100, as best seen inFIG.1andFIG.2, can include a top panel116. The top panel116may have an exterior surface118and an interior surface122. A handle, e.g., the handle120, may be provided at the top102. For instance, the handle120can be coupled, e.g., pivotably coupled, to the exterior surface118of the top panel116. The top panel116can function as a door to provide access to the interior volume114of the carrying case100. The top panel116can include one or more coupling members such as hinges132, e.g., two sets of hinges132as shown inFIGS.3-4, by which the top panel116can pivotably open to provide access to the interior volume114of the carrying case100. The hinges132can be provided along an edge of the top panel116adjacent to the back108of the carrying case100and can be coupled to a complementary set of hinges on the back108, as will be described in further detail below. Additionally, the top panel116can be locked in a closed position, e.g., by one or more latches134. The latches134can be coupled to complementary latch receiver(s)188. As shown inFIG.1, the latches134can be on a different side of the top panel116than the hinges132. For instance, the latches134can be disposed adjacent to the front106of the carrying case100opposite the back108of the carrying case100. Thus, when the latches134are engaged with the latch receiver(s)188, the top panel116may be locked in a closed position and the interior volume114of the carrying case100may be enclosed. As shown inFIG.3andFIG.11, the front106of the carrying case100can include a first front panel164and a second front panel176. The first front panel164can have an exterior surface166and an interior surface168which each extend from a bottom edge170to a top edge172. The bottom edge170of the first front panel164can be disposed adjacent to the bottom104of the carrying case100, and the top edge172of the first front panel164can be spaced from both the bottom104and the top102of the carrying case100. The second front panel176can have an exterior surface178and an interior surface180which each extend from a top edge182to a bottom edge184. 
The top edge182of the second front panel176can be disposed adjacent to the top102of the carrying case, and the bottom edge184of the second front panel176can be spaced from both the bottom104and the top102of the carrying case100. As shown inFIG.2, the top edge172of the first front panel164can be disposed adjacent to the bottom edge184of the second front panel176. For instance, the top edge172of the first front panel164can abut the bottom edge184of the second front panel176such that the first front panel164and the second front panel176together form the front106of the carrying case100. Additionally, as described briefly above, the second front panel176can include one or more latch receivers188adjacent to the top edge182that are configured to receive the one or more latches134to secure the top panel116and close the carrying case100. As best seen inFIG.11, the interior surface180of the second front panel176can include one or more recesses190separated by at least one protruding wall portion192. The protruding wall portions192can have a panel thickness extending from the exterior surface178to the interior surface180that is greater than a panel thickness from the exterior surface178to the interior surface180of any of the recesses190. The recesses190can be provided as at least a portion of a compartment to hold accessories within the interior volume114, and the protruding wall portion192can divide or separate the respective compartments. Further, as illustrated inFIG.3andFIG.11, the first front panel164and the second front panel176may be pivotably coupled together. For instance, the first front panel164can have one or more hinges174along the top edge172of the exterior surface166thereof and the second front panel176can have one or more hinges186along the bottom edge184of the exterior surface178thereof that complementarily couple to the one or more hinges174of the first front panel164. As such, the second front panel176can pivot open to provide access to the interior volume114of the carrying case100. As shown inFIG.3, the latches134of the top panel116and the complementary latch receivers188extend onto the front106of the carrying case100. Thus, the second front panel176is prevented from pivoting open along the hinges186when the top panel116is closed and the latches134are engaged. However, as shown inFIG.11, when the top panel116is in an open configuration, the second front panel176can be pivotably opened along the hinges186away from the interior volume114of the carrying case100to provide additional access to the interior volume114, e.g., to more easily access items stored within the interior volume114. FIG.4illustrates a rear view of the carrying case100, i.e., a view of the back108of the carrying case100. The back108includes a back panel152which includes an exterior surface154and an interior surface156. The back panel152can include one or more hinges160that are complementary to the hinges132of the top panel116such that the top panel116is pivotably coupled to the back panel152. FIG.5illustrates a bottom view of the carrying case100, i.e., a view of the bottom104of the carrying case100. The bottom104includes a bottom panel136including an exterior surface138and an interior surface140(shown inFIG.8and described in further detail below). The bottom panel136extends from the front106to the back108of the carrying case100. Further, as shown inFIG.5, at least a portion of a first side panel194and a second side panel230are disposed along the bottom104of the carrying case100. 
Turning toFIG.6, a view of the interior surface140of the bottom panel136is provided. The interior surface140includes a power tool compartment base142configured to receive a power tool; a charger compartment base144configured to receive a battery charger for the power tool; a battery compartment base146configured to receive a battery for the power tool; and an accessory compartment base148configured to receive at least one additional accessory of the power tool. Protruding sections150of the interior surface140can divide each of the compartment bases142,144,146,148. The bottom panel136can have a bottom panel thickness extending between the exterior surface138and the interior surface140. The bottom panel thickness of the protruding sections150is greater than the bottom panel thickness of any of the compartment bases142,144,146,148. FIG.7shows a left side view of the carrying case100, i.e., a view of the first side110of the carrying case100. The first side110can include a first side panel194which can have an exterior surface196and an interior surface198. A handle200may be formed in the exterior surface196of the first side panel194. The first side panel194can extend from a top edge206to a bottom edge208. The top edge206can be adjacent to the top panel116of the carrying case100and the bottom edge208can be adjacent to the bottom104of the carrying case100. The first side panel194can include a channel210configured to receive a portion of the power tool to enable the portion of the power tool to extend outside the interior volume114of the carrying case100. The channel210can be a generally U-shaped channel as shown inFIG.7andFIG.8A. The channel210can extend from the top edge206of the first side panel194toward the bottom edge208of the first side panel194. The channel210can be spaced from the bottom edge208of the first side panel194. However, the present invention contemplates alternative embodiments of the channel, e.g., extending from the bottom edge208toward the top edge206, or extending from a side toward an opposite side, so long as the channel210can be enclosed at at least one end thereof. The channel210can include a gasket212surrounding the channel210configure to provide a cushioned, sealable, padded, or otherwise protected surrounding for the channel210. As shown in at leastFIGS.1and7, a protective cover for a portion of the power tool such as a scabbard226can extend through the channel210. As shown inFIGS.7and8B, a cover214can be provided to cover or enclose at least a portion of the channel210, e.g., a portion of the channel210through which the scabbard226does not extend when the power tool is placed within the carrying case100. The cover214may be removably coupled to the first side panel194. The cover214can include a main body216configured to cover or enclose a portion of the channel210, and one or more extending wings218configured to extend beyond a width of the channel210to assist with securing the cover214to the first side panel194. For instance, a first extending wing may extend towards a first, e.g., right, side of the channel210and a second extending wing218may extend towards a second, e.g., left, side of the channel210. Each of the extending wings218can include a distal elongated portion220configured to extend toward the top edge206and the bottom edge208of the first side panel194relative to the extending wings218. 
The exterior surface196of the first side panel194can include complementary recesses224on either side of the channel210to receive the extending wings218and distal elongated portions220of the cover214. Further, the main body216of the cover214can include a recessed portion222forming a groove at a top side or top edge thereof configured to be enclosed by the top panel116of the carrying case100when the carrying case100is in a closed arrangement. However, it is to be understood that the particular geometry of the cover214, including extending wings218, distal elongated portions220, and recessed portion222, is not limited to that which is illustrated inFIGS.7,8A and8Band the cover214can be formed from any suitable geometric shape(s) that cover or enclose an open portion of the channel210and can be secured to the first side panel194. Additionally, the present invention contemplates the use of one or more fasteners such as a latch, lock, pin, screw, or other suitable fastener not formed as part of the first side panel194to secure the cover214in place relative to the channel210. FIGS.9and10A-B illustrate the second side112of the carrying case100. The second side112includes a second side panel230which can include an exterior surface232and an interior surface234. The exterior surface232can have a handle200formed thereon, e.g., opposite the handle200of the first side panel194. The interior surface234of the second side panel230can include at least one protruding portion236and at least one recessed portion238, as best seen in the top-down view shown inFIG.10B. The second side panel230has a second side panel thickness extending from the exterior surface232to the interior surface234. The second side panel thickness of the protruding portion236can be greater than the second side panel thickness of the recessed portion238. The recessed portion238can form at least a part of an accessory compartment of the carrying case100. For instance, the recessed portion238can be aligned with the battery charger compartment base144of the bottom panel136to form at least a portion of a compartment for a battery charger compartment within the interior volume114. The protruding portion236of the interior surface234of the second side panel230can be aligned with the battery compartment base146of the bottom panel136to form at least a portion of a compartment for a battery compartment within the interior volume114. As illustrated inFIGS.10A and10B, the protruding portion236can be provided with a battery receiver240. The battery receiver240can be configured to receive, e.g., couple to and/or secure in place, a battery for the power tool. The battery receiver240can include a “common foot”242configured to complementarily receive the battery. The common foot242can include a pair of mounting rails including a first rail244and a second rail246extending generally parallel to each other. The first rail244and the second rail246can extend in a vertical direction, e.g., generally perpendicular to the bottom104of the carrying case100. Each of the first rail244and the second rail246can include a first wall248extending perpendicular to the interior surface234and a second wall250extending from the first wall248and extending generally perpendicular to the first wall248, forming a generally L-shaped arrangement. In this arrangement, each second wall250can extend generally parallel to the interior surface234along the protruding portion236. 
A receiving groove252can be formed in a space between the protruding portion236, first wall248and second wall250of each of the first rail244and the second rail246. In use, a battery24(seen inFIG.11) can be inserted into a battery compartment of the interior volume114of the carrying case100by sliding a battery housing26into the battery receiver240. Specifically, the battery housing26can have a complementary pair of mounting rails32formed on a side thereof that are configured to be received within the receiving grooves252of the first rail244and second rail246of the common foot242. Further, the battery housing26can include a latch28attached thereto, e.g., above the complementary rails along the same side of the battery housing26. The latch28can be received within a latch receptacle254of the battery receiver240in order to secure the battery24in place relative to the battery receiver240. In this manner, the battery24can be received by both the battery compartment base146of the bottom panel136and secured within the interior volume114to the battery receiver240to prevent movement of the battery24within the carrying case100, thereby protecting the battery24from potential damage. As illustrated inFIG.11, the carrying case100can be configured to carry a power tool. The power tool can be, e.g., a chain saw10such as a battery-powered chain saw. The chain saw10can have a motor housing12, a guide bar and saw chain14extending from the motor housing12, handles such as a top handle16and a side handle18, a hand guard20and a battery receiver22. The guide bar and saw chain14can be enclosed within the scabbard226to provide protection to the guide bar and saw chain14from damage and to prevent the saw chain from causing any damage to a user or objects nearby. The hand guard20can extend upward from the motor housing12of the chain saw10. The hand guard20may be operably coupled to an emergency brake for the chain saw10, e.g., when the hand guard20is rotated forward toward the guide bar and saw chain14. When the hand guard20and emergency brake are disengaged, i.e., the hand guard20is in its upward position, the hand guard20may extend above the front106and back108of the carrying case100such that the top102cannot be closed. When the hand guard20and emergency brake are engaged, i.e., the hand guard20is rotated forward toward the first side110of the carrying case100and toward the guide bar and saw chain14, the hand guard20may be in a sufficiently compact position to remain within the interior volume114of the carrying case100such that the top panel116can be closed over the hand guard20. The chain saw10may be a battery powered chain saw as described above. A battery charger36for the battery24of the chain saw10can be disposed adjacent to the battery receiver22of the chain saw10within the interior volume114, e.g., sitting in the battery charger compartment base144of the bottom panel136. Further, the chain saw10may require lubricating oil. A bottle38of lubricating oil may be received in the interior volume114, e.g., sitting in the accessory compartment base148of the bottom panel136. Moreover, as shown inFIG.11, when the chain saw10, battery24, battery charger36and bottle38are stored within the interior volume114, there can be additional space within the interior volume114in which additional accessories can be stored. For instance, gloves, personal protective equipment such as protective eyewear, ear protection, or any other desirable accessories may be stored within the interior volume114. 
As described above, the second front panel176and the top panel116can be pivotably closed to enclose the interior volume114of the carrying case100and secure the power tool and accessories for storage and/or transport. A power tool kit2may include the power tool, e.g., chain saw10, a battery24configured to power the power tool, a battery charger36for the battery, one or more accessories for the power tool such as a bottle of lubricating oil, and a carrying case100for storage and/or transport of the each of the power tool, battery, battery charger, and one or more accessories. The power tool, battery, battery charger and one or more accessories can be disposed within the carrying case100to form the kit2. Further aspects of the invention are provided by one or more of the following embodiments:Embodiment 1: A carrying case for a power tool, comprising: an interior volume defined by a top, a bottom, a front, a back, a first side, and a second side; a plurality of distinct compartments within the interior volume, wherein the distinct compartments are configured to receive a battery-powered power tool, a battery configured to power the battery-powered power tool, and a battery charger for the battery, respectively; and a battery receiver configured for coupling engagement with the battery, wherein the battery receiver is formed on an interior surface of at least one of the front, back, first side, or second side.Embodiment 2. The carrying case of any one or more of the embodiments, wherein the battery receiver comprises first and second mounting rails configured to complement a complementary pair of mounting rails of the battery, and a latch receptacle configured to receive a latch of the battery.Embodiment 3. The carrying case of any one or more of the embodiments, wherein the first and second mounting rails each comprise an L-shape and a channel configured to receive the complementary pair of mounting rails of the battery.Embodiment 4. The carrying case of any one or more of the embodiments, wherein the battery receiver is adjacent to the compartment configured to receive the battery charger along the second side; wherein the second side comprises a second side panel having a protruding portion comprising the battery receiver, the protruding portion having a thickness extending from an exterior to an interior surface of the second side panel, and a recessed portion having a thickness extending from an exterior to an interior surface of the second side panel, wherein the thickness of the protruding portion is greater than the thickness of the recessed portion such that the battery receiver protrudes into the interior volume relative to the compartment configured to receive the battery charger.Embodiment 5. The carrying case of any one or more of the embodiments, wherein the front comprises a first panel and a second panel pivotably coupled together, wherein each of the first panel and the second panel extend between the first side and the second side.Embodiment 6. The carrying case of any one or more of the embodiments, wherein the first panel is adjacent to the bottom and the second panel is adjacent to the top.Embodiment 7. The carrying case of any one or more of the embodiments, further comprising at least one additional compartment configured to receive a power tool accessory.Embodiment 8. The carrying case of any one or more of the embodiments, wherein the power tool accessory is a bottle of fluid.Embodiment 9. 
The carrying case of claim1, wherein the bottom comprises a bottom panel, wherein each distinct compartment is defined in part by at least one recess in the bottom panel specific to each respective compartment, further wherein the recesses are separated by portions of the bottom panel having a panel thickness that is greater than a panel thickness of the recesses.Embodiment 10. The carrying case of any one or more of the embodiments, wherein the power tool is a chain saw.Embodiment 11. The carrying case of any one or more of the embodiments, wherein the first side comprises a first side panel that extends from the front to the back, wherein the first side panel comprises a channel configured to receive a portion of the power tool that extends outside the interior volume, wherein the channel is spaced from both the front and the back.Embodiment 12. The carrying case of any one or more of the embodiments, wherein the channel further comprises a gasket.Embodiment 13. The carrying case of any one or more of the embodiments, wherein the first side further comprises a cover removably coupled to the first side panel, wherein the cover is configured to cover at least a portion of the channel; wherein an exterior surface of the first side panel comprises at least one recess configured to receive a coupling member of the cover.Embodiment 14. The carrying case of any one or more of the embodiments, wherein the cover further comprises a groove at a top side thereof configured to receive a portion of a top panel of the top side of the carrying case.Embodiment 15. A kit for transporting a power tool comprising: a battery-powered power tool; a battery configured to power the battery-powered power tool; a battery charger for the battery; and a carrying case comprising: an interior volume defined by a top, a bottom, a front, a back, a first side, and a second side; a plurality of distinct compartments within the interior volume, wherein the distinct compartments are configured to receive a battery-powered power tool, a battery configured to power the battery-powered power tool, and a battery charger for the battery, respectively; and a battery receiver configured for locking engagement with the battery, wherein the battery receiver is formed on an interior surface of at least one of the front, back, first side, or second side; wherein the battery-powered power tool, the battery, and the battery charger are disposed within the carrying case.Embodiment 16. The kit of any one or more of the embodiments, wherein the power tool is a chain saw.Embodiment 17. The kit of any one or more of the embodiments, further comprising at least one accessory for the power tool.Embodiment 18. The kit of any one or more of the embodiments, wherein the at least one accessory comprises a bottle of fluid.Embodiment 19. The kit of any one or more of the embodiments, wherein at least a portion of the power tool extends outside the interior volume when the carrying case is in a closed arrangement.Embodiment 20. The kit of any one or more of the embodiments, further comprising a scabbard configured to cover the portion of the power tool that extends outside the interior volume of the carrying case. This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. 
The patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they include structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims. | 28,257 |
11858115 | DETAILED DESCRIPTION OF THE EMBODIMENTS In order to provide a clear understanding of the objects, features, and advantages of the embodiments, the following are detailed and complete descriptions of the technical solutions adopted in the embodiments. The described embodiments are only some, rather than all, of the possible embodiments of the disclosure. All other embodiments obtained by persons of ordinary skill in the art without creative effort fall within the protection scope of this disclosure. The disclosure is further described below with reference to the drawings and specific embodiments. It should be noted that the embodiments in the disclosure and the features in the embodiments may be combined with each other provided they do not conflict. In the following description, numerous specific details are set forth in order to provide a full understanding of the disclosure. The disclosure may be practiced otherwise than as described herein. The following specific embodiments are not to limit the scope of the disclosure. Unless defined otherwise, all technical and scientific terms herein have the same meaning as generally understood in the field of the art. The terms used in the disclosure are intended to describe particular embodiments and are not intended to limit the disclosure. The disclosure, referencing the accompanying drawings, is illustrated by way of examples and not by way of limitation. It should be noted that references to “an” or “one” embodiment in this disclosure are not necessarily to the same embodiment, and such references mean “at least one.” Referring toFIG.1toFIG.6, an assembling and disassembling-facilitated tool rack includes a base10, several supporting rods20, a top plate30, several supporting plates60, and screws40. The base10is provided with several first threaded holes11. Bottoms of the supporting rods20are provided with threaded columns21; tops of the supporting rods20are provided with mounting platforms22which are provided with second threaded holes23; the several supporting rods20are connected in sequence from top to bottom; the threaded columns21of the supporting rods20on the bottommost layer are in threaded connection to the first threaded holes11; and the threaded columns21of the supporting rods20on adjacent upper layers are in threaded connection to the second threaded holes23of the supporting rods20on adjacent lower layers. The top plate30is provided with first through holes31used for plugging the mounting platforms22. The supporting plates60are provided with second through holes61used for plugging the mounting platforms22; and the supporting plates60are clamped between adjacent upper and lower supporting rods20. The screws40are in threaded connection to the second threaded holes23of the supporting rods20on the uppermost layer; and the top plate30is clamped between the screws40and the supporting rods20on the uppermost layer.
By the arrangement of the above structure, during assembling, the threaded columns at the bottoms of the supporting rods are in threaded connection to the first threaded holes, thus fixing the supporting rods to the base; the supporting plates are then aligned with the supporting rods, so that the mounting platforms on the supporting rods are plugged into the second through holes, which can effectively limit the supporting plates; at this time, side walls of the mounting platforms are abutted against inner walls of the second through holes to prevent relative movement between the supporting plates and the supporting rods, which facilitates further mounting; the threaded columns of other supporting rods are then in threaded connection to the second threaded holes; at this time, the upper and lower adjacent supporting rods are respectively abutted against upper and lower surfaces of the supporting plates to further position the supporting plates, which makes the connection more stable; the screws are in threaded connection to the second threaded holes; at this time, lower surfaces of screw heads are abutted against an upper surface of the top plate, and the upper surfaces of the supporting rods adjacent to the mounting platforms are abutted against a lower surface of the top plate, so that the top plate is clamped between the screws and the supporting rods, and can be thus fixed; a user conveniently carries out assembling; the top plate can be effectively fixed; and the stability of the tool rack is improved. The top plate30is further provided with first accommodating through holes32that run through the top plate30; and the first accommodating through holes32are used for plugging tools. By the arrangement of the above structure, during use, a user can plug tools and parts such as a screwdriver, a sleeve, a shock absorber and an adapter into the first accommodating through holes. Preferably, the first accommodating through holes may have different sizes, such as a diameter of 12 MM. The first accommodating through holes with this aperture can be used for plugging the tools and parts such as a screwdriver, a sleeve and a shock absorber. The diameters of some of the first accommodating through holes may be 17 MM, so these first accommodating through holes can be used for plugging the tools and parts such as an adapter. The user can plug products with different diameters into the first accommodating through holes with the corresponding apertures according to needs. It is convenient for the user to collect and use various tools and parts. The product convenience and adaptability are improved, and the use experience of the user is enhanced. The supporting plates60are further provided with second accommodating through holes62that run through the supporting plates60; the second accommodating through holes62correspond to the first accommodating through holes32; and the second accommodating through holes62are used for plugging tools passing through the first accommodating through holes32. By the arrangement of the above structure, the second accommodating through holes correspond to the first accommodating through holes, which can further support the tools passing through the first accommodating through holes and improve the stability of the tool rack. Two groups of the second accommodating through holes62on the upper and lower adjacent supporting plates60correspond to each other one by one.
By the arrangement of the above structure, the first accommodating through holes on the upper and lower adjacent plates correspond to each other. When tools and parts such as a screwdriver, a sleeve, a shock absorber and an adapter are plugged into the first accommodating through holes, the upper and lower distributed first accommodating through holes respectively support different portions of the above-mentioned tools and parts, so that the above-mentioned tools and parts can be fixed more stably, and the stability of the tool rack is improved. An upper surface of the base10is provided with first accommodating grooves12. By the arrangement of the above structure, the first accommodating grooves are used for accommodating parts such as a screw and a screw cap. During use, a user can place a part in each first accommodating groove, so that parts such as a screw and a screw cap can be effectively collected, and accidental loss of the parts can be prevented; meanwhile, the user can also conveniently take the parts; the convenience of the tool rack is improved; and the loss of the user can be effectively reduced, and the working efficiency of the user is improved. A magnetic layer50is attached into each of the first accommodating grooves12. By the arrangement of the above structure, the magnetic layer is attached into each of the first accommodating grooves, so that parts such as a screw and a screw cap placed in the first accommodating grooves can be effectively sucked, and the above-mentioned parts are prevented from falling off and are collected and stored more effectively. The stability and convenience of the tool rack are improved, and a user picks and places the parts conveniently. An upper surface of the base10is provided with second accommodating grooves13; and the second accommodating grooves13correspond to the first accommodating through holes32. By the arrangement of the above structure, the second accommodating grooves correspond to the first accommodating through holes. During use, parts such as a reamer and a bearing disassembler pass through the first accommodating through holes and are plugged into the second accommodating grooves; upper and lower portions of parts such as a reamer and a bearing disassembler are respectively abutted against the first accommodating through holes and the second accommodating grooves, so that the parts such as a reamer and a bearing disassembler are placed on the tool rack more stably; the stability of the tool rack is improved; the parts such as a reamer and a bearing disassembler on the tool rack are prevented from falling off; and the use experience of a user is enhanced. An upper surface of the base10is provided with third accommodating grooves14. By the arrangement of the above structure, the upper surface of the base is provided with several third accommodating grooves which are used for accommodating parts and tools such as a screwdriver head. During use, a user can replace a screwdriver head at any time according to a need, so as to use a screwdriver tool more conveniently. Different types of screwdriver heads can be plugged in the third accommodating grooves. On the one hand, the screwdriver heads can be conveniently collected; and on the other hand, the user can also be allowed to conveniently use the suitable screwdriver heads, so that the convenience of the tool rack is improved. Third accommodating through holes33are also formed in the top plate30.
By the arrangement of the above structure, the third accommodating through holes are used for accommodating tools such as nipper pliers, diagonal pliers and shock-proof pliers. Preferably, the third accommodating through holes may be rectangular. During use, side walls of the tools such as nipper pliers, diagonal pliers and shock-proof pliers are respectively abutted against side walls of the third accommodating through holes, so that the tools such as nipper pliers, diagonal pliers and shock-proof pliers can be stably fixed, and a user can conveniently use these tools. The upper surface of the base10is provided with fourth accommodating grooves15; and the fourth accommodating grooves15correspond to the third accommodating through holes33. By the arrangement of the above structure, heads of the tools such as nipper pliers, diagonal pliers and shock-proof pliers pass through the third accommodating through holes and are abutted against the fourth accommodating grooves, so that the tools such as nipper pliers, diagonal pliers and shock-proof pliers are stressed more uniformly and are placed more stably. The stability of the tool rack can be effectively improved, and the use experience of a user is enhanced. Referring toFIG.1toFIG.6, an assembling and disassembling-facilitated tool rack includes a base10, supporting rods20, a top plate30, supporting plates60, and screws40. The base10is provided with several first threaded holes11. Bottoms of the supporting rods20are provided with threaded columns21; tops of the supporting rods20are provided with mounting platforms22which are provided with second threaded holes23; and the threaded columns21are in threaded connection to the first threaded holes11. The top plate30is provided with first through holes31used for plugging the mounting platforms22. The screws40are in threaded connection to the second threaded holes23; and the top plate30is clamped between the screws40and the supporting rods20. By the arrangement of the above structure, during assembling, the threaded columns at the bottoms of the supporting rods are in threaded connection to the first threaded holes, thus fixing the supporting rods to the base; the top plate is then aligned with the supporting rods, so that the mounting platforms on the supporting rods are plugged into the first through holes, which can effectively limit the top plate; at this time, side walls of the mounting platforms are abutted against inner walls of the first through hole to prevent relative movement between the top plate and the supporting rods, which facilitates further mounting; the screws are in threaded connection to the second threaded holes; at this time, lower surfaces of screw heads are abutted against an upper surface of the top plate, and the upper surfaces of the supporting rods adjacent to the mounting platforms are abutted against a lower surface of the top plate, so that the top plate is clamped between the screws and the supporting rods, and can be thus fixed; a user conveniently carries out assembling; the top plate can be effectively fixed; and the stability of the tool rack is improved. The top plate30is further provided with first accommodating through holes32that run through the top plate30; and the first accommodating through holes32are used for plugging tools. By the arrangement of the above structure, during use, a user can plug tools and parts such as a screwdriver, a sleeve, a shock absorber and an adapter into the first accommodating through holes. 
Preferably, the first accommodating through holes may have different sizes, such as a diameter of 12 MM. The first accommodating through holes with this aperture can be used for plugging the tools and parts such as a screwdriver, a sleeve and a shock absorber. The diameters of some of the first accommodating through holes may be 17 MM, so these first accommodating through holes can be used for plugging the tools and parts such as an adapter. The user can plug products with different diameters into the first accommodating through holes with the corresponding apertures according to needs. It is convenient for the user to collect and use various tools and parts. The product convenience and adaptability are improved, and the use experience of the user is enhanced. An upper surface of the base10is provided with first accommodating grooves12. By the arrangement of the above structure, the first accommodating grooves are used for accommodating parts such as a screw and a screw cap. During use, a user can place a part in each first accommodating groove, so that parts such as a screw and a screw cap can be effectively collected, and accidental loss of the parts can be prevented; meanwhile, the user can also conveniently take the parts; the convenience of the tool rack is improved; and the loss of the user can be effectively reduced, and the working efficiency of the user is improved. A magnetic layer50is attached into each of the first accommodating grooves12. By the arrangement of the above structure, the magnetic layer is attached into each of the first accommodating grooves, so that parts such as a screw and a screw cap placed in the first accommodating grooves can be effectively sucked, and the above-mentioned parts are prevented from falling off and are collected and stored more effectively. The stability and convenience of the tool rack are improved, and a user picks and places the parts conveniently. An upper surface of the base10is provided with second accommodating grooves13; and the second accommodating grooves13correspond to the first accommodating through holes32. By the arrangement of the above structure, the second accommodating grooves correspond to the first accommodating through holes. During use, parts such as a reamer and a bearing disassembler pass through the first accommodating through holes and are plugged into the second accommodating grooves; upper and lower portions of parts such as a reamer and a bearing disassembler are respectively abutted against the first accommodating through holes and the second accommodating grooves, so that the parts such as a reamer and a bearing disassembler are placed on the tool rack more stably; the stability of the tool rack is improved; the parts such as a reamer and a bearing disassembler on the tool rack are prevented from falling off; and the use experience of a user is enhanced. An upper surface of the base10is provided with third accommodating grooves14. By the arrangement of the above structure, the upper surface of the base is provided with several third accommodating grooves which are used for accommodating parts and tools such as a screwdriver head. During use, a user can replace a screwdriver head at any time according to a need, so as to use a screwdriver tool more conveniently. Different types of screwdriver heads can be plugged in the third accommodating grooves.
On the one hand, the screwdriver heads can be conveniently collected; and on the other hand, the user can also be allowed to conveniently use the suitable screwdriver heads, so that the convenience of the tool rack is improved. Third accommodating through holes33are also formed in the top plate30. By the arrangement of the above structure, the third accommodating through holes are used for accommodating tools such as nipper pliers, diagonal pliers and shock-proof pliers. Preferably, the third accommodating through holes may be rectangular. During use, side walls of the tools such as nipper pliers, diagonal pliers and shock-proof pliers are respectively abutted against side walls of the third accommodating through holes, so that the tools such as nipper pliers, diagonal pliers and shock-proof pliers can be stably fixed, and a user can conveniently use these tools. The upper surface of the base10is provided with fourth accommodating grooves15; and the fourth accommodating grooves15correspond to the third accommodating through holes33. By the arrangement of the above structure, heads of the tools such as nipper pliers, diagonal pliers and shock-proof pliers pass through the third accommodating through holes and are abutted against the fourth accommodating grooves, so that the tools such as nipper pliers, diagonal pliers and shock-proof pliers are stressed more uniformly and are placed more stably. The stability of the tool rack can be effectively improved, and the use experience of a user is enhanced. Finally, it should be noted that the above embodiments are merely used for illustrating the technical solutions of the disclosure, rather than limiting the disclosure; though the disclosure is illustrated in detail with reference to the aforementioned embodiments, it should be understood by those of ordinary skill in the art that modifications may still be made to the technical solutions disclosed in the aforementioned respective embodiments, or equivalent substitutions may be made to a part of the technical features thereof; and these modifications or substitutions do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the respective embodiments of the disclosure. | 18,954 |
11858116 | DETAILED DESCRIPTION OF THE DISCLOSURE Socket tools, or simply sockets, are universally used by professional and amateur mechanics and maintenance technicians and come in sets of various size and style. Storing and organizing sockets is a challenge due to their various sizes, shape, and typical numbers in a set. Commercially available socket holder apparatus typically provide a series of individual socket holders in a straight line configuration along a central rail or tool body. The sockets are attached and released by hand, such as by push-on, pull-off action or by half-turns and the like, from a holding post or similar. The sockets held on the socket holders are in close proximity to one another and adjacent sockets can “rattle” or impact one another, especially during transport of the apparatus in a vehicle. Repeated contact eventually results in damage to adjacent sockets such as flaking chrome or coating, scratches and dents and the like. Some socket holders are mounted to move along a rail or tool body without any way to secure the socket holders to specific locations. For larger socket sizes, adjacent sockets bang into one another every time the rail or body is tilted sufficiently to cause the holders to slide and when the rail is rotated to or through a generally vertical orientation. Even on an apparatus having a way to secure the socket holders into selected positions, the holders sometimes come loose by accident, vibration, part failure, or wear, resulting in unwanted and damaging rattling or sliding of adjacent sockets into one another. Secure and spaced positioning of adjacent socket holders on a tool holding apparatus to prevent contact between adjacent sockets is needed. While the sockets are typically marked with identifying information, often by stamping of the exterior surface of the socket cylinder, it can be difficult to read the information, especially where the sockets are positioned in a line where the information can be obscured by adjacent sockets. Friction Socket Holder Assembly FIG.1is an orthogonal view of an exemplary friction socket holder according to aspects of the disclosure.FIG.2is an orthogonal view of the bottom of the exemplary friction socket holder ofFIG.1according to aspects of the disclosure.FIG.3is a detail view of a friction post of the exemplary friction socket holder ofFIG.1according to aspects of the disclosure. The Figures will be discussed jointly. FIG.1shows a friction post tool holder10, more specifically a friction post socket tool holder. The holder10includes a body12having one or more rows14of a plurality of spaced-apart friction posts16for holding a plurality of tools or sockets. Socket Holder Assembly The body12has a base18designed to sit on a relatively flat surface. The base18defines a bottom surface20of the body12. In an embodiment, the bottom surface20of the body12is defined by a generally flat perimeter22as shown. In alternate embodiments, the bottom surface20can define a generally flat planar wall, a contoured surface, a plurality of feet, etc. In an embodiment, as shown, the bottom surface20is made of a non-slip material such as rubber, silicone or the like, including Thermal Plastic Rubber (TPR), Thermal Plastic Elastomer (TPE), or silicone rubber. 
The non-slip material assists in maintaining the tool holder in a selected position on a surface, particularly a surface which is at an angle to the horizontal, such as on a typical hood, trunk, roof, or other vehicle part, or on a vibrating or moving surface, such as on an idling vehicle or a table supporting an operating power tool or motor or the like. The non-slip bottom surface20can be integrally formed with the body12, attached to the body12by fasteners, adhesives or friction fitting, removably attached to the body12, etc. In an embodiment, the bottom surface20is attached to the body12by a manufacturing process referred to as overmolding. The base18can also include finger holds24allowing for ease of lifting the tool holder10from a surface. The tool holder10loaded with sockets has substantial weight and can be difficult to lift or to “pry” from a flat surface. The finger holds24provide a surface for the user to grasp or lift. Alternately the finger holds24can be apertures in the body12, contours shaped into the body12, or grips of non-slip material attached to the body12. The body12defines at least one platform26for positioning of the held sockets. The platform26is elongate to define a row14of posts16and a row of sockets when in use. A platform26acan define an elevated surface, that is, generally flush with the height of the wall30, as seen in row14ainFIG.1. Alternately, a platform26bcan define a “sunken” or recessed surface, as seen in row14bofFIG.1. Mounted to the platform26can be a platform sheet28, such as a non-slip, embossed, or decorated sheet covering or substantially covering the platform26. Preferably such a sheet is of a soft material so as to not scratch or damage the sockets. The sheet28can be attached to the platform26fixedly, removably, by adhesive or other fastener. In an embodiment, the platform sheet28is attached to the base12by overmolding. In an embodiment, the sheet28is integral with the posts16. The body12can take various shape depending on the types and sizes of tools to be held, the arrangement of held tools, the aesthetics of the holder, etc. The base12as shown includes an opposed front wall30and back wall32, and opposed side walls34. The walls in some embodiments are connected to one another. In some embodiments the walls are generally vertical. In some embodiments, as shown, some or all of the walls can be angled with respect to the vertical. The body12can also include sloped surfaces38aand38bdefined, for example, between the generally horizontally planar surfaces or platforms26aand26b. The sloped surface38a, for example, can form the front wall30or a portion thereof. In other embodiments, a generally vertical front wall30and a sloped surface, such as surface38a, may both be present. The planar surfaces26aand26bcan be at different heights to allow for ease of socket placement and removal, positioning of sockets of different sizes at different levels, separation of sockets of different sizes, types, drive socket shapes, socket heads, or measurement standards (SAE, metric), etc. As seen inFIG.1, the planar surface26bis positioned in a recessed area42. A recessed area may provide additional protection to the sockets from scratches and damage during handling and use of the holder. The holder body12can be made of various materials. In embodiments, the holder body12is made of plastic, such as ABS, nylon, polycarbonate, polypropylene, etc., and can be manufactured using a mold. 
Such materials and manufacture allow for a wide variety of body shapes and sizes at a reasonable expense. The tool holder10can also include a labelling assembly50. The labelling assembly50includes markings52to convey information about the tools, such as markings indicating socket sizes in SAE or metric sizes. The labels can comprise embossing, etching, silk-screening, engraving or other markings directly onto the body12, such as seen inFIG.1. The labels can be positioned at sloped surfaces38, as shown, for ease of viewing from the front or above the holder. The labels can comprise adhesive labels positioned on the body. FIGS.4A-Care detail views of embodiments of tool labels for permanent or removable attachment to the exemplary friction socket holder ofFIG.1according to aspects of the disclosure. In some embodiments, the labelling assembly50includes one or more labels54attached or attachable to the body12. The labels54can comprise tabs, strips, ribbons, snap-in labels, etc. The labels can be interchangeably attachable to the body12, posts16, platforms26, sloped surfaces38, etc., of the holder10. FIG.4Ashows an embodiment having a plurality of individual labels54aattachable to corresponding individual label panels56defined on the sloped surface38of the holder10. The individual labels54acan be attached removably or permanently. Each individual label corresponds to an individual post16of the holder assembly10. That is, the individual label54ais of a length corresponding to the area associated with a post16and positioned to indicate that the label corresponds to the post. The labels can be attached, for example, by adhesive, friction fit, snap-in, etc. FIG.4Bshows an embodiment having a plurality of individual labels54battachable or removably attachable to the sloped surface38of the holder10. In the embodiment shown, each individual label54bhas one or more snap-in legs58which cooperate with corresponding holes59defined in the surface38. More generally, the labels54can define attachment mechanisms58which cooperate with corresponding attachment mechanisms59defined on the body12. Other attachment mechanisms are known in the art. FIG.4Cshows an embodiment having a longitudinally extending label54chaving a plurality of markings corresponding to a plurality of posts16. The strip label54ccan be attached, removably or permanently, to the holder10such as by adhesive, snap-in assembly, slide-in assembly, tongue and groove, or other mechanisms known in the art. A strip label54c, in strip or ribbon form, may extend the entire length of the platform26or sloped surface38. The strip label54cincludes a plurality of markings52corresponding to a plurality of posts16. Interchangeable strip labels54ccan be provided such that the user can select from the strip labels54caccording to the sizes or types of sockets used with the holder assembly10. For example, multiple strip labels54ccan provide label markings52for SAE or metric sizes. The labels54can attach to the body12by attachment means as known in the art. For example, the labels54can be attached, removably or permanently, by cooperating posts58and holes59, slidable labels and rails60, tongue and groove, snap-on assembly, etc. The labels54can attach to the body such that they are slidable along the length of the body, for example. The user can be provided with a plurality of interchangeable labels54, fixedly or removably attachable to the body12at the user's selection. 
For example, a kit can be provided having a plurality of labels for SAE and metric measurements, socket type, drive socket type, socket head type, etc. The labels can be color-coded or otherwise visually differentiated. Clip Labels FIG.5Ais a partial orthogonal view of an exemplary embodiment according to aspects of the disclosure showing a socket holder and cooperating “clip” label assemblies.FIG.5Bis a partial orthogonal view of an exemplary embodiment according to aspects of the disclosure showing a socket holder and cooperating “clip” label assemblies. FIG.5Ais a partial orthogonal view of a holder assembly10having two parallel rows14each having a plurality of posts16for holding socket tools with a cooperating clip label assembly comprising a plurality of individual clip members60. Exemplary clip members60cooperate with attachment mechanisms defined on the holder body12. In the embodiment shown inFIG.5A, each clip member60comprises a generally horizontal central plate62having an aperture64extending therethrough. The aperture64cooperates with a coordinating post16, allowing the post to extend through the aperture. In the embodiment shown, the post16includes a columnar shoulder66which fits closely through the aperture64. A friction, snap-on, or other attaching fit can be provided between the columnar shoulder and the aperture. Various shapes of shoulder and aperture can be employed. In an embodiment, the shoulder upper surface68is flush with the central plate62. Each clip member60is removably attachable to the body12. For example, the clip member60can slide on or snap on to the body at cooperating contours, indentations, apertures, etc., defined in the body12. In the embodiment shown, each clip member60slidingly and grippingly engages grooves70defined in a wall30,32or sloped surfaces38of the assembly body12. As shown, the clip member60can have a central plate62, opposing legs72, and flanges74. The central plate62, in the illustrated embodiment, extends across a platform26. The legs72can conform to the sloped surfaces38, recess walls, or other surfaces of the body12. The grooves70are grippingly engaged by the flanges74and the clip member is maintained on the holder assembly10. In an embodiment, the legs72of the clip members are flexible and the clip member is “snapped” into an engaged position by pressing the clip member downward onto the assembly. Alternately, the clip members60can be slidingly engaged onto and removed from the assembly body12. In an exemplary embodiment, the body12defines a cross-section which cooperates with the clip member60, allowing the clip member60to readily slide along the body12at grooves70. An end cap (not shown) can be removably mounted to the assembly body12, allowing clip members60to be slid onto the assembly body12. In embodiments utilizing clip members60which are slidably attachable to the body12, the posts16must be removable from the body, as explained elsewhere herein, such as by unscrewing from the holder or by also slidably attaching to the body. In an embodiment, the clip members are constrained against rotational movement in relation to the assembly such as by interference between opposing legs of the clip member and a wall of the assembly. The clip members60further include displayed markings52corresponding to the sockets held by the posts16. The markings can be positioned on the clip central plate62, leg72, or other surface defined on the clip member60. 
Alternately, a label plate can be used, similar to those described above herein with regard toFIGS.4A-B. The markings52provide socket identification information, for example, socket size in metric or standard units, and/or socket type, and/or indications for locking and unlocking the socket from the socket holder. The markings on any given clip member can be identical to or different from other such markings. Further, the clip members and body can comprise an orientation guide to insure clips are positioned in the correct orientation on the body. For example, as shown, the clip members60have a front leg72which is positioned at an angle corresponding to that of the sloped surface38. The clip members60seen inFIG.5Aare all of a uniform length and abut one another when positioned on the holder body12. In some embodiments the posts16are spaced apart at varying distances to allow for mounting of varying size sockets on the holder. That is, some posts are spaced further apart than others. Similarly, the clip members60can be provided in varying lengths, with longer clip members corresponding to posts spaced further apart. Adjacent clip members60or adjacent socket holder assemblies114can, as seen inFIG.9andFIG.11, abut one another defining a minimum spacing between adjacent, mounted sockets of the same or similar diameter. Socket sets typically have multiple sockets of small diameter and the clip members60each have a length greater than the socket diameter to maintain spacing between adjacent mounted sockets. However, many socket sets include multiple sockets of relatively larger diameters due to the larger size of fastener for which the sockets are employed. Where larger diameter sockets are mounted on adjacent socket holder assemblies, the disclosure provides a mechanism to maintain sufficient spacing to prevent the larger sockets from knocking together during transport and reorientation of the rail assembly. As an example, a typical small socket base diameter is approximately one-half inch, which size may be used for a number of sockets for differently sized fasteners. For such sockets, the clip members can have a length of approximately three-quarters of an inch. A larger diameter socket may have a diameter of one and one-half inches or greater. As an example, a two and one-half inch diameter socket can use a three inch long clip member. For such sockets, clip members are provided having lengths greater than the diameter of the designated socket. InFIG.5Ba single lengthy clip member60is provided having a plurality of apertures defined therethrough corresponding to the plurality of posts16. The lengthy clip member60has similar parts as described above such as a central plate62, apertures64, legs72, etc. Attachment of the single lengthy clip member is similar to that described above with respect to the plurality of smaller clip members and will not be described here again. The lengthy clip member can have a plurality of markings52corresponding to the plurality of socket posts16. The user can be provided with a plurality of interchangeable clip members60, fixedly or removably attachable to the body12at the user's selection.
Sockets and Posts Socket wrenches, ratchets and other driving devices typically come with square drive heads which fittingly receive any of a corresponding set of sockets with similarly sized drive sockets. A socket typically has a socket head for receiving a fastener and a drive socket for receiving the drive post of the wrench, ratchet or other driving device. The socket head defines a fastener-shaped hole for receiving the head of a fastener. For example, a hex (hexagonal) head socket will drive a hex head fastener of the same size. The drive socket of the socket defines a hole for receiving the drive post of the drive device, such as a ratchet wrench. For square posted drive devices and drive sockets, standard sizes are typically one-quarter inch, three-eighths inch, and one-half inch square. (E.g., a “quarter inch drive socket”.) Larger sizes are rarer but include standard sizes of three-quarter, one, and one and a half inches square. For a set of sockets having a given size drive socket, multiple sockets are provided for various sized fasteners. For example, a quarter inch drive socket set might include thirteen sockets having a range of sizes and shapes for different fasteners. InFIG.1, a holder assembly10is provided with a row14aof posts16labelled and spaced for a set of thirteen SAE sockets having socket heads ranging in size from one-quarter inch to one inch. (For smaller sockets, the posts16can be spaced closer together obviously without adjacent sockets touching each other.) The row14bprovides thirteen posts labelled and spaced for use with thirteen metric size sockets ranging from size 7 to 19. The tool holder10can be provided in various lengths with various numbers of posts16and with various spacing between the posts16to provide for mounting of corresponding numbers of sockets. Further, additional rows14can be provided in alternate embodiments. Additionally, socket wrenches and drive devices are available having a “spline drive.” A spline drive uses a drive post with multiple splines (e.g., six) defined along the length of the drive post. The corresponding sockets obviously have splined drive socket holes for use with the splined drive post. Typical sized sockets weigh between around 10 and 40 grams, although the weights depend on the socket material, the depth of the socket, the socket type, etc. For example, impact sockets are thicker walled and weigh more than standard sockets. Deep sockets are longer than standard “shallow” sockets and consequently weigh more. Some larger and smaller sockets are available and will weigh more or less. FIG.6illustrates a cross-sectional view of a post16having six splines82with an overlay outline of the square drive hole wall90and socket exterior wall92of a socket tool showing six contact points86between the socket and post. Since the holder posts16hold the sockets by friction fit, the posts16are slightly larger in dimension than the corresponding drive socket hole. The posts16are made of a flexible material which elastically yield, flex or “give” when pressing the socket onto the post and which apply an outward force against the walls of the drive socket hole, thereby holding the socket onto the post. The posts16can take various shape in cross-section. For example, the posts can be square, hexagonal, octagonal, round, etc. in cross-section. Square posts, however, may make it difficult to fit a square holed socket onto the post. The square socket hole would need to be rotationally aligned with the post, for example. 
The same is true for an octagonal post, for example. A cylindrical post would provide only four contact points with the walls of the square hole in the socket. In one embodiment, the posts16have a central body80which is splined, as shown, having a plurality of longitudinal splines82running the height of the post16. A splined post16can be especially useful for use with square drive sockets. In the embodiment shown, the post16has six splines82, which can be said to roughly define a hexagon when the tips of the splines are connected by imaginary lines. Similar posts having fewer or more splines can also be used. The post surfaces84between the splines can, for example, define a cylinder, hexagon, etc. The post surfaces between the splines do not contact the socket in use. One benefit of having six equally spaced splines82is that such a post provides for six points of contact86with the drive hole wall90of a square socket drive while not requiring rotational alignment between the socket and post. A columnar post16(with circular cross-section), for example, would provide four points of contact86with a square socket drive hole wall90. A square-column post16(with a square cross-section) would provide contact with the square drive hole wall90along its entire perimeter, but it would require rotational alignment of the socket and post. That is, the user would have to rotate the socket to the proper orientation to position the socket on the post. A four splined post would have the drawback of either requiring rotational alignment of socket and post or requiring spline diameters of greater size than the corner-to-corner dimension of a square drive hole. An eight splined post design results in unused splines (not contacting the socket) or requires different dimensions from spline to spline and rotational alignment. In some embodiments the posts16are made of Thermal Plastic Rubber (TPR) or Thermal Plastic Elastomer (TPE). Alternate materials include silicone rubber. These materials provide resiliency and elasticity while also allowing a user to readily force a socket onto the post. These materials are also resistant to chemical breakdown upon exposure to common but corrosive fluids such as brake cleaner and transmission fluids. In some embodiments, the friction fit between a post16and a positioned socket is such that the entire holder assembly10can be held upside down and the socket will not disengage from the post. The post is made of a material, as described, for providing a high friction between post and socket. Further, the post is sized and shaped to provide a solid friction fit between post and socket. Further, the post is made of (or covered in) a suitable elastic material to deform when the socket is positioned on the post and to then provide a positive elastic force against the socket. In some embodiments, a holding force of greater than 10 grams is provided by the fit between the friction post and the socket. In some embodiments, a holding force of greater than 400 grams is provided by the fit between the friction post and the socket. In some embodiments, the friction fit force is great enough to allow the entire assembly, loaded with sockets, to be held by grasping only a single socket positioned on a post. Overmolding Overmolding is a manufacturing technique using consecutive moldings to create a monolithic item.
For example, a single item is created by manufacturing a first part (a substrate) of a first material and then “molding over” the first part with a second material to create the unified single part. The substrate can be a machined metal part, a molded plastic part, etc. The substrate is partially or fully covered by the subsequently applied overmold materials which are injection molded into a mold tool formed around the substrate. When the overmold material cures or solidifies, the two materials become joined together as a single item. The resulting continuous item is composed of chemically bonded and often mechanically interlocked materials of different types. Overmolding materials can be plastic, rubber, Thermal Plastic Rubber (TPR) or Thermal Plastic Elastomer (TPE), for example. In some embodiments, the friction post socket holder is manufactured using overmolding techniques. InFIG.2, a bottom view of the friction post socket holder10shows signs and results of an overmolding process. The holder body12is made of a plastic material, and can be made by injection molding in some embodiments. The plastic material of the body12can be relatively hard and unyielding and therefore not suitable for a soft perimeter22for contacting a surface (e.g., a painted surface of a vehicle). Further, the plastic can be unyielding and non-elastic and so not suitable material for the friction posts16. In the embodiment shown, the relatively softer perimeter22, the posts16(or outer surfaces thereof), and platform sheets28are made of TPR, TPE or the like, and are overmolded onto the body12. Using the overmold technique, the holder10parts (first molded underlay and second molded overlay) are chemically and physically locked together. The perimeter is both chemically bonded to the body and mechanically interlocks with the body. For example, the perimeter22has interlocking tabs94which cooperate with notches defined in the body12. Further, the platform sheets28and posts16are overmolded onto and into the body12. The surface sheets28are chemically bonded to the underlying platforms26of the body. The sheets28are also mechanically interlocked with the body where, for example, overmold material columns96cooperate with corresponding apertures in the body12. In an embodiment, the posts16are entirely made of overmolded material. In another embodiment, the posts comprise a harder substrate covered by a softer overmold material. Overmolding insures that the perimeter22, sheets28and posts16do not separate or detach from the body12, either entirely or at random points between the overmold and substrate. The resulting holder10is of solid, unitary construction, and is tough and reliable. Use of appropriate overmold materials provides a soft, gripping layer for contacting ferrous surfaces and chrome plated sockets which are prone to scratching. Further, the overmolding allows for a suitably flexible and resilient material to form or overlay the posts16. Finally, the overmold process eliminates assembly parts such as fasteners, potentially reducing or eliminating fastener costs, scratching of sockets and surfaces by fasteners, machining time and costs for the holder body, and assembly time and costs for the holder generally. The overmolding also allows for colorful aesthetics (since the substrate and overmold can be of different colors). 
Modular Post Assemblies FIG.7is a cross-sectional orthogonal view of a modular friction socket holder post assembly having a plurality of removable post units according to aspects of the disclosure.FIG.8is a partial orthogonal view of a modular friction socket post assembly according to aspects of the disclosure.FIG.9is a detail cross-sectional view of the modular friction socket post assembly ofFIG.7according to aspects of the disclosure.FIGS.7-9are generally discussed together to provide an understanding of the operation of the apparatus. An apparatus100for releasably holding by friction fit posts16a plurality of socket tools includes a rail assembly112and plurality of socket holder assemblies114which slidably and removably engage the rail assembly112. The exemplary rail assembly112defines a generally U-shaped channel122having a bottom wall116, opposing side walls118, and opposing flanges120. Exemplary socket holder assemblies114slidably engage the rail assembly112as shown. The holder assembly114includes a post16and a base member132. The base member132cooperates with the rail assembly112. FIG.7shows an exploded view of a socket holder assembly114having a base member132and a friction post16mountable to a tab134defined on the holder assembly base member132. Alternately, the post can be defined on or formed monolithically with the base member132. InFIG.8, an embodiment is shown wherein the post16is mounted to the base member132by a threaded shaft136and cooperating threaded hole138in the base member132. Assembled socket holders are also seen inFIGS.7-9, positioned on the rail assembly with the base member132engaging the channel122and the posts16extending upwardly out of the channel. In an exemplary embodiment of a socket holder assembly114, the base member132engages the channel22. The base member132is of a size and cross-section to slidingly engage the rail assembly channel122. Flanges140defined on the base member132cooperate with, slide within and maintain the holder assembly114in the channel22. More particularly, the flanges140of the base member132slide into and engage the corresponding grooves142defined by the rail assembly walls116,118and flanges120. The bottom surface of the base member132may include friction (or anti-friction) features133to reduce (or increase) the force required to slide the socket holder assembly along the rail assembly. As seen inFIGS.8-9, the rail assembly is shown removed from the tool organizer body and is attachable to the tool organizer body. Alternately, the rail assembly can be formed monolithically with the tool organizer body. In the embodiment seen inFIG.8, the assembly further includes a plurality of clip members160. The socket holder assembly114defines a mounting post16and a columnar shoulder66. A clip member160cooperates with the socket holder assembly114and rail assembly112. In the embodiment shown inFIG.8, the clip member160comprises a central plate162defining an upper surface and an aperture164defined therethrough for cooperating with the columnar shoulder66of the post16. Socket markings52are provided on the clip. In an embodiment, the columnar shoulder upper surface168is flush with the upper surface of the central plate162. Each clip member160slidingly and grippingly engages grooves190defined in the exterior surfaces of the side walls192of the rail assembly body14in some embodiments. The clip member160has central plate162, opposing legs172, and flanges174. 
The central plate162, in the illustrated embodiment, rests on the base member132of the socket holder assembly114. The grooves190are slidably engaged by the flanges174and the clip member is maintained on the rail assembly by engagement between the grooves190and flanges174. In an embodiment, the legs of the clip members are flexible and the clip member is "snapped" into an engaged position by pressing the clip member downward onto the rail assembly. Alternately, the clip members can be slidingly engaged onto and removed from the rail assembly. In an embodiment, the clip members are constrained against rotational movement in relation to the rail assembly by interference between opposing legs of the clip member and at least a side wall of the rail assembly. Adjacent clip members or adjacent socket holder assemblies can abut one another defining a minimum spacing between adjacent, mounted sockets of the same or similar diameter. As described elsewhere herein, sockets come in varying diameters. Consequently, in some embodiments, the socket holder assemblies114can be provided in varying lengths to accommodate the varying sizes of socket. Similarly, the clips can be of varying lengths. In some embodiments, the rail assembly, socket holder assembly, and/or clip assembly can further include orientation guides for proper orientation of these assemblies with one another. An orientation guide may require a base member132, and therefore socket holder assembly114, to be inserted into the interior channel122at a specified orientation. Thus, a set of socket holder assemblies would "face the same way" in the channel. For example, cooperating orientation mechanisms can be used on alternate assemblies. For example, one of the grooves190can employ an alternate profile which cooperates with a flange140of corresponding profile, thereby requiring orientation of the base member132in a specified orientation with respect to the rail assembly. Similar mechanisms can be used to orient the clips on the rail assembly. Magnetic Plates FIG.10is an orthogonal exploded view of an embodiment of the friction post socket holder having a magnetic panel for attachment to a ferrous surface according to aspects of the disclosure.FIG.11is an orthogonal view of an embodiment of the friction post socket holder having a magnetic panel for attachment to a ferrous surface and a magnetic panel for securement of socket tools according to aspects of the disclosure. The magnetic back plate assembly200is attached to the assembly body12, by friction fit, adhesive, fasteners, slide-in assembly (e.g., tongue and groove), a picture-frame assembly, or as otherwise known in the art. In the illustrated embodiment, the magnetic back plate200is mounted to the holder body12. The magnetic back plate assembly200, in the shown embodiment, comprises a plurality (two) of magnetic panels202. The magnetic back plate assembly allows the holder assembly10to be securely positioned on any suitable ferrous surface. InFIG.11, additional magnetic tool mounting plates204are provided and positioned on the body12at or as the surfaces28. Hence the sockets, when positioned on the holder assembly10, are maintained in position by the friction fit of the posts16and the magnetic force of the plates204.
While the making and using of various embodiments of the present disclosure are discussed in detail, it is appreciated that the present disclosure provides many applicable concepts that may be embodied in a wide variety of specific contexts. The specific embodiments discussed herein are merely illustrative of specific ways to make and use the disclosure. Only the claims appended hereto delimit the scope of any claimed inventions. | 34,739 |
11858117 | DETAILED DESCRIPTION OF THE INVENTION The applicant emphasizes that for the content of this specification, including the embodiments and the claims described in the following, relevant directional terms shall refer to the directions shown in the drawings in principle. In addition, for the embodiments and drawings described in the following, identical component signs refer to identical or similar components or structural features. As shown inFIG.1toFIG.7, according to a first embodiment of the present invention, the dual-purpose mechanics creeper comprises: A flat board10of an elongated shape, having a top surface11and bottom surface12. The bottom surface of the flat board includes a first pivotal connecting base13and a second pivotal connecting base14, and one side of each one of the pivotal connecting bases includes a locking member15. To increase the comfort during use, the top surface11can be further installed with a foam or a soft pad. A first swing arm20comprises: a longitudinal shaft21, a lateral shaft22and two rollers23. The first swing arm20uses one end of the longitudinal shaft21to pivotally attach to the first pivotal connecting base13of the bottom surface of the flat board, the lateral shaft22is arranged at another end of the longitudinal shaft21, and the two rollers23are arranged on the lateral shaft22. The first swing arm20is able to move between a first position adjacent to the bottom surface12of the flat board and a second position away from the bottom surface12of the flat board. One side of the longitudinal shaft21includes an engagement portion24. In this embodiment, the longitudinal shaft21is divided into an upper section and a lower section. The lower section211is inserted into the upper section212, in order to allow the lower section211to move relative to the upper section212, thereby changing an overall length of the longitudinal shaft. A securement structure213can be arranged between the upper section212and the lower section211in order to engage with and secure the upper section and the lower section. In this embodiment, the upper section and the lower section include through holes. The securement structure is inserted into the through holes formed on the upper section and the lower section in order to achieve locking and securement; however, the present invention is not limited to such a securement method only. The present invention may also use other securement methods, such as bolts or a clamp, to achieve the effect of securing the upper section and the lower section. A second swing arm30comprises: a longitudinal shaft31, a lateral shaft32and two rollers33. The second swing arm30uses one end of the longitudinal shaft31to pivotally attach to the second pivotal connecting base14of the bottom surface of the flat board, the lateral shaft32is arranged at another end of the longitudinal shaft31, and the two rollers33are arranged on the lateral shaft32. In addition, the lateral shaft32further includes an engagement member34arranged thereon. The second swing arm30is able to move between a first position adjacent to the bottom surface12of the flat board and a second position approaching the first swing arm20. When the second swing arm30is at the first position, the lateral shaft32is adjacent to the bottom surface12of the flat board. When the second swing arm30moves to the second position, the engagement member34on the lateral shaft32is able to correspondingly engage with the engagement portion24of the first swing arm.
In this embodiment, one side of the lateral shaft32of the second swing arm comprises two supporting columns35. A separation distance between the two supporting columns35is equivalent to the width of the longitudinal shaft21of the first swing arm, in order to increase the stability of the attachment between the first swing arm and the second swing arm. The engagement member34is arranged on one of the supporting columns35. In this embodiment, the engagement member34is an insertion pin; however, the present invention is not limited to such a type only. The two rollers40are arranged at one end of the bottom surface12of the flat board. When the structure of the present invention is used as a horizontal flat mechanics creeper, the first swing arm20and the second swing arm30are located at the first position. In other words, the first swing arm20and the second swing arm30are located at positions adjacent to the bottom surface12of the flat board, as shown inFIG.1andFIG.2. Under such a condition, the dual-purpose mechanics creeper can be flatly placed on the floor for use, and the rollers23of the first swing arm, the rollers33of the second swing arm and the two rollers40can be used to move on the floor, in a way similar to a conventional horizontal flat mechanics creeper. To further increase the stability between the first swing arm20and the second swing arm30, the locking member of the first pivotal connecting base13is correspondingly locked inside a through hole25of the first swing arm, and the locking member of the second pivotal connecting base14is correspondingly locked inside a through hole36of the second swing arm30. When there is a need to erect the first swing arm20for use, the user can simply adjust the first swing arm from the first position to the second position, as shown inFIG.3. At this time, the longitudinal shaft21of the first swing arm is able to move from the original position adjacent to the bottom surface12of the flat board to the vertical standing position, and an angle is formed with the bottom surface12of the flat board, and such angle is preferably 90 degrees. Next, the second swing arm30is adjusted from the first position to the second position. In other words, the second swing arm30is moved from the original position adjacent to the bottom surface12of the flat board to the position intersecting with the first swing arm20. At this time, the longitudinal shaft21of the first swing arm is located between the two supporting columns35of the second swing arm, and the engagement member34of the second swing arm is correspondingly engaged at the engagement portion24of the first swing arm, as shown inFIG.5. The second swing arm30can then be obliquely supported between the bottom surface12of the flat board and the first swing arm20, as shown inFIG.4. Accordingly, the dual-purpose mechanics creeper can be transformed from the original use state of horizontal flat type on the floor to the use state of vertical standing type for use, thereby achieving the objective of dual-purpose use for the repair of the driver's seat. FIG.8shows a second embodiment of the present invention. The structure of the second embodiment is similar to the structure of the first embodiment, and the difference lies in that the bottom surface12of the flat board does not include two rollers40.
Although the quantity of the rollers installed at the bottom surface of the flat board may affect the supporting strength and stability of the flat board during use in the horizontal flat state, the use of the rollers of the first swing arm and the rollers of the second swing arm as the support, without installation of rollers on the bottom surface of the flat board, can also achieve the same effect. FIG.9toFIG.11show a third embodiment of the present invention. Similar to the first embodiment, the third embodiment of the present invention comprises a flat board10, a first swing arm20comprising a longitudinal shaft21, a lateral shaft22and two rollers23, a second swing arm30comprising a longitudinal shaft31, a lateral shaft32and two rollers33, and two rollers40. The difference between the third embodiment and the first embodiment lies in that the longitudinal shaft21is divided into an upper section and a lower section. The lower section211is inserted into the upper section212, in order to allow the lower section211to move relative to the upper section212, thereby changing an overall length of the longitudinal shaft. A securement structure213can be arranged between the upper section212and the lower section211in order to engage with and secure the upper section and the lower section211. In this embodiment, the securement structure213is a clamp, and the clamp comprises two clamping arms respectively arranged at two sides of the upper section212, and the lower section211includes a plurality of corresponding locking slots214formed thereon, thereby allowing the two clamping arms to lock at the corresponding locking slots opposite from each other. Such a clamping method can be more easily operated in comparison to the aforementioned insertion pin method. Moreover, in order to tightly secure the first swing arm20and the second swing arm30on the flat board10, two secure members50and51are provided on the first swing arm20and the second swing arm30, respectively. Two secure holes52and53are provided on the bottom surface12of the flat board10corresponding to the positions of the secure member50and the secure member51. In this embodiment, these two secure members are bolts and the secure holes are screw holes. | 9,064
11858118 | DESCRIPTION OF EMBODIMENTS To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the following further describes the technical solutions of the embodiments of the present invention in detail with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely some but not all of the embodiments of the present invention. FIG.1shows a companion robot and a system architecture of a usage environment of the present invention. The usage environment inFIG.1is applicable to any scenario (for example, a community, a street, an administrative district, a province, a country, a transnational scenario, or even a global scenario), and includes the following units: a family or child-care institution301including at least one child303, a child interaction robot302, and at least one indoor radio access network304; a parent306(father or mother, an immediate family member, another guardian, or the like) and a portable intelligent terminal305of the parent; an outdoor radio access network307that provides a remote wireless network access service for the intelligent terminal305; a child-care service institution316that provides a professional data service for a child-care service and that includes a child-growth cloud server317, a child-growth model database308, and a child-care knowledge base309; a social public service institution310that provides government public data support for the child-care service, including but being not limited to weather forecast, a list of healthcare facilities, epidemic situation information, an emergency notice, and the like, where the social public service institution310includes a social public service cloud server311and a social public service cloud database312; and at least one third-party network cloud service institution312that provides a refined professional network cloud data service such as instant messaging, a child-care service social application, a network audio/video service, online shopping, payment and logistics tracking, or comments on and votes for a community and a medical institution and that includes a third-party network service cloud server314and a third-party network service cloud database315. The system architecture of the usage environment further includes the Internet320that is used by a network operator to provide a network service. 
A product shape implemented in the embodiments of the present invention is shown as400inFIG.2, and includes: a touch display screen401, configured to display graphic image information to a target object, and receive a touch control signal from a user; a speaker module407, configured to provide a sound output signal for the target object; a microphone array and sensor module402, configured to detect a feature of the target object, such as sound, an expression, or a behavior; a start/pause/emergency button403, configured to provide a simple operation instruction for the target object, and respond to an interrupt instruction of the user in an emergency case; and a processing and operation module404, configured to calculate and output a control instruction to a child-care robot based on a user status signal that is input by the microphone array and sensor module402, a user operation instruction of the button403, guardian request information of a cared child from a network, a service instruction of a child-care service institution from the network, third-party network cloud service data, and the like. The child-care robot outputs sound, an image, a body action and movement, and the like. The child-care robot further includes a crawler-type/wheel-type mobile mechanical apparatus405and a mechanical arm406. In one embodiment, a feasible product shape is a robot.FIG.3shows a feasible implementation of the core component "processing and operation module"404of the robot, including a mainboard510and other peripheral functional components. Both a sensor module501and a button502are connected to an I/O module of the mainboard510. A microphone array503is connected to an audio/video encoding/decoding module of the mainboard510. A touch display controller of the mainboard510receives touch input of a touch display screen504, and provides a display drive signal. A motor servo controller drives a motor and encoder507based on a program instruction, and drives the crawler-type/wheel-type mobile mechanical apparatus405and the mechanical arm406, to form movement and body languages of the robot. Sound is obtained after output of the audio/video encoding/decoding module is pushed to a speaker508by using a power amplifier. A hardware system further includes a processor and a memory on the mainboard510. In addition to an algorithm, an execution program, and a configuration file of the robot, the memory also records audio, video, and image files and the like that are required when the robot performs caring, and further includes some temporary files generated during program running. A communications module of the mainboard510provides a function of communication between the robot and an external network, and may be a short-range communication module such as a Bluetooth module or a WiFi module. The mainboard510further includes a power management module, configured to implement battery charging and discharging, and energy saving management of a device by using a connected power system505. The processor is the core component; it has operation and processing capabilities, and manages and controls the cooperation of the other components. The sensor module501of the robot detects and collects sensing information of a companion object of a target object and emotion information of the target object that is obtained when the target object interacts with the companion object. 
The sensing information includes at least one of view information and voice information, and the emotion information includes at least one of view information and voice information. Audio, a video, or an image may be captured by a camera, and the detection and collection may alternatively be completed by another sensor or may be completed through cooperation with another sensor. The processor extracts an emotion feature quantity based on the emotion information, determines, based on the emotion feature quantity, an emotional pattern used by the target object to interact with the companion object, determines, based on the emotional pattern, a degree of interest of the target object in the companion object, extracts behavioral data of the companion object from the sensing information based on the degree of interest, screens the behavioral data to obtain simulated object data, and generates an action instruction based on the simulated object data. A behavior execution module is configured to receive the action instruction of the processor and interact with the target object. The behavior execution module may include components that can interact with the outside, such as the crawler-type/wheel-type mobile mechanical apparatus405, the mechanical arm406, the touch display screen401, and a microphone. Further, in another embodiment, the processor of the robot has only a simple processing function, and the simulated object data is processed by a service server. A communications module is further disposed on the robot, and communicates with an intelligent terminal and the like by using an antenna and the service server. The communications module sends, to the service server, the sensing information of the companion object of the target object and the emotion information of the target object that is obtained when the target object interacts with the companion object, and receives the simulated object data sent by the service server. Then, the processor obtains the simulated object data, and generates the action instruction based on the simulated object data. A memory is further disposed on the robot, and the memory stores a simulated object database to record the simulated object data. Referring toFIG.4,FIG.4is a flowchart of a method for interaction between a robot and a target object according to an embodiment of the present invention. Descriptions are provided by using an example. For example, the target object is a child. Block S101. Detect and collect sensing information of a companion object of the target object and emotion information of the target object that is obtained when the target object interacts with the companion object. The sensing information includes at least one of view information and voice information, and the emotion information includes at least one of view information and voice information. A camera may be started by using a machine, to monitor daily life of the child, monitor an expression, heartbeats, an eye expression, and the like of the child, determine an emotion of the child, and further capture an image at a moment corresponding to the emotion to obtain emotion information of the child. The robot may capture an image or a video at a current moment based on a child behavior (an expression, an action, or the like). The captured image may be one image, or may be several images, a video in a period of time, or the like. Image content may include the child behavior, an ambient environment, an event of interest to the child, and the like. 
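As a minimal, non-authoritative sketch of this collection step, the following Python code shows one way the monitoring loop could be organized; the camera, emotion detector, local storage and cloud interfaces (`camera`, `detect_emotion`, `robot_storage`, `cloud`) are hypothetical stand-ins and are not part of any real robot API.

```python
import time

def collect_emotion_samples(camera, detect_emotion, robot_storage, cloud=None, poll_s=0.5):
    """Watch the child and capture sensing and emotion information whenever
    an emotional reaction is detected. All collaborators are hypothetical
    interfaces supplied by the caller."""
    while True:
        frame = camera.read_frame()
        emotion = detect_emotion(frame)           # e.g. ("delight", 0.87) or None
        if emotion is not None:
            label, confidence = emotion
            clip = camera.record_clip(seconds=5)  # one image, several images, or a short video
            sample = {
                "timestamp": time.time(),
                "emotion": label,
                "confidence": confidence,
                "media": clip,                    # child behavior, environment, object of interest
            }
            robot_storage.append(sample)          # keep locally on the robot
            if cloud is not None:
                cloud.upload(sample)              # or push to the cloud server
        time.sleep(poll_s)
```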
The captured image may be locally stored on the robot, or may be uploaded to a cloud server. Block S102. Extract an emotion feature quantity based on the emotion information, determine, based on the emotion feature quantity, an emotional pattern used by the target object to interact with the companion object, determine, based on the emotional pattern, a degree of interest of the target object in the companion object, extract behavioral data of the companion object from the sensing information based on the degree of interest, and screen the behavioral data to obtain simulated object data. The simulated object data is used by the robot to simulate the companion object, and the simulated object data is used to describe the companion object. It may be considered that the simulated object data is digital human data or a digital human resource. When the simulated object data is obtained, a digital human image can be obtained from the data. Further, in an embodiment, the screening the behavioral data to obtain simulated object data may be: screening the behavioral data to extract a behavioral key feature, and generating the simulated object data by using the key feature. The behavioral data includes a body action, the behavioral key feature includes a body key point or a body action unit, and the key feature is generated through statistical learning or machine learning; or the behavioral data includes an expression, the behavioral key feature includes a partial face key point or a facial action unit, and the key feature is generated through pre-specification or machine learning; or the behavioral data includes a tone, the behavioral key feature includes an acoustic signal feature in voice input of the companion object, and the key feature is generated through pre-specification or machine learning. For example, a method for extracting a visual feature from the sensing information (for example, the video or the image) is as follows: 83 key feature points of a face are first tracked by using a Bayesian shape model method with constraints, and then a three-dimensional (3D) rigid motion of a head and three-dimensional flexible facial deformation are estimated by using a minimum energy function method. For a formed three-dimensional grid image, seven action unit vectors (AUV) are used: AUV6-eye closing, AUV3-eyebrow drooping, AUV5-outer eyebrow raising, AUV0-upper lip raising, AUV2-lip stretching, and AUV14-labial angle drooping. Each AUV is a column vector including coordinate displacements of all grid vertices of a unit. While a video sequence is input through fitting by using a Candide-3 facial model, animation parameters of these AUVs may also be obtained. Therefore, for each image in the video, seven-dimensional facial animation parameters are finally obtained as visual emotional features. Emotional feature dimension reduction includes a linear dimension reduction method such as principal component analysis (PCA) and linear discriminant analysis (LDA), and a non-linear manifold dimension reduction method such as Isomap and local linear embedding (LLE), so that a feature in low-dimensional space better maintains a geometrical relationship of the feature in high-dimensional space. A theoretical method of continuous emotion description space indicates that in continuous emotion description, it is considered that different emotions change gradually and smoothly, and an emotional status is in a one-to-one correspondence with a space coordinate point with a specific quantity of dimensions. 
Relatively common continuous emotion description models include an emotion wheel theory and a three-dimensional arousal-pleasure-control degree description. The emotion wheel theory considers that emotions are distributed in a circular structure. A structure center is a natural origin, that is, a state with various emotional factors. However, these emotional factors cannot be reflected due to extremely weak strength at this point. The natural origin extends in different directions to manifest different emotions, and levels of emotions of a same type are further classified as emotional strength changes. In addition, a strength change in emotions of a same type is used as a third dimension for description, and an emotion wheel concept is extended to a three-dimensional space. Based on the description of a two-dimensional (2D) emotion space and an emotion wheel, an emotion-related feature in a video is matched with the space, so that emotions can be effectively described or classified. The extracted feature is matched with a visual emotion feature database, for example, a Cohn-Kanade video emotion database, to identify a corresponding emotional feature of the child. A thing is extracted from the image or the video captured by the robot, and an object that interests the child is identified by using the emotional feature, to generate the simulated object data. The robot simulates data about the object based on the simulated object data and then interacts with the child. The thing may be extracted by using an existing image/voice recognition algorithm. An operation may be locally performed by the robot, or the image or the video may be uploaded to the server and the server performs an operation. Content that the child is watching, a person who interacts with the child, or the like may be extracted. An expression, a voice, an action, and the like of the person who interests the child and interacts with the child are extracted. The robot obtains appropriate data through learning, to interact with the child. For the person (a companion object B) who interests the child, the robot obtains conversation content, a body action, an expression, and a tone of the companion object B. The robot generates, through machine learning and training performed on the body action, the expression, and the tone of the companion object B, a model used for interacting with the child. Expression interaction is used as an example, and may specifically include: collecting an expression of a first object when a child A shows interest; extracting each facial action of an expression that interests or does not interest the child; classifying, by using a classification algorithm such as SVM (support vector machine), RF (random forest), or deep learning, the facial actions into a facial action that interests the child or a facial action that does not interest the child; selecting, for expression synthesis of the robot, the facial action that interests the child; and interacting, by the robot, with the child by using a learned expression. In one embodiment, facial expression data may be extracted and learned. For example, there are 14 groups of facial actions, including: inner eyebrow raising, outer eyebrow raising, eyebrow drooping, upper eyelid raising, cheek raising, eyelid contraction, eyelid tightening, nose raising, upper lip raising, angulus oris pulling, angulus oris contraction, lower angulus oris raising, mouth pulling, mouth opening, and chin drooping. 
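A minimal sketch of the expression-learning step described above is given below, assuming each expression sample of the companion object has already been reduced to a numeric facial-action feature vector (for example, the seven-dimensional facial animation parameters mentioned earlier) and labeled by whether the child showed interest. The scikit-learn SVM used here is one of the classifiers named in the text (SVM, RF, or deep learning); the feature values shown are placeholders, and the feature extraction itself is assumed to happen elsewhere.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Each row: facial-action feature vector for one expression sample of the
# companion object; label 1 = the child showed interest, 0 = no interest.
X = np.array([
    [0.8, 0.1, 0.6, 0.7, 0.5, 0.2, 0.1],
    [0.1, 0.7, 0.1, 0.1, 0.2, 0.8, 0.9],
    [0.7, 0.2, 0.5, 0.8, 0.6, 0.1, 0.2],
    [0.2, 0.8, 0.2, 0.1, 0.1, 0.7, 0.8],
])
y = np.array([1, 0, 1, 0])

# Classify facial actions into "interests the child" vs. "does not interest the child".
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)

# Select, for expression synthesis by the robot, only the facial actions
# predicted to interest the child.
candidates = np.array([
    [0.75, 0.15, 0.55, 0.70, 0.50, 0.20, 0.15],
    [0.15, 0.75, 0.15, 0.10, 0.20, 0.75, 0.85],
])
interesting = candidates[clf.predict(candidates) == 1]
print(interesting)
```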
Voice interaction is used as an example, and may include: collecting a voice signal of a first object when a child A shows interest; extracting each acoustic signal of the voice signal that interests the child A; collecting statistics about a feature of an acoustic signal that interests the child A; synthesizing a robot voice by using the feature of the acoustic signal that interests the child A; and interacting, by the robot, with the child by using a learned voice. In one embodiment, acoustic data including information such as a fundamental frequency, a speaking speed, and a ratio of unvoiced sound to voiced sound may be extracted and learned. For example, a fundamental frequency signal is obtained by calculating a sum of fundamental frequencies of all voiced frames and then dividing the sum by a quantity of the voiced frames. In different emotional states, three statistical parameters: an average, a range, and a variance of fundamental frequencies have extremely similar distribution trends. Surprise has a greatest fundamental frequency average, followed by delight and anger, and sadness has a lowest fundamental frequency average. The ratio of unvoiced sound to voiced sound is a time ratio of a voiced segment to an unvoiced segment. Delight, anger, and surprise have a slightly higher ratio of unvoiced sound to voiced sound than calmness, and calmness has a slightly higher ratio of unvoiced sound to voiced sound than fear and sadness. The speaking speed is represented by a ratio of a word quantity to voice signal duration corresponding to a sentence. Speaking speeds in cases of anger and surprise are the highest, followed by delight and calmness, and speaking speeds in cases of fear and sadness are the lowest. Therefore, different emotions can be identified by using the foregoing acoustic signal. Body action interaction is used as an example, and may specifically include: collecting a body action of a first object when a child A shows interest or shows no interest; extracting each body action unit in a case of an expression that interests or does not interest the child; classifying, by using a classification algorithm such as SVM, RF, or deep learning, body action units into a body action unit that interests the child and a body action unit that does not interest the child; selecting, for body action synthesis of the robot, the body action unit that interests the child; and interacting, by the robot, with the child by using a learned body action. In one embodiment, body action data may be extracted and learned. For example, there are 20 groups of action units, including: body leaning forward, head swing, nodding, head shaking, hand raising, hand clapping, grabbing, walking, squatting, and the like. There are 35 key points, including heads (4), thoracoabdominal parts (7), and arms (6 on each side, and 12 in total), and legs (6 on each side, and 12 in total). A picture/video in a film that interests the child is taken. The robot obtains appropriate data through learning, to interact with the child. Further, in daily life, the robot detects and collects behavior information of the child, and a manner used herein may be the same as the foregoing manner of collecting the emotion information of the child. To be specific, a same detection and collection process is used, or there is a same collection source. 
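The acoustic statistics described above (the average fundamental frequency over voiced frames, the ratio of unvoiced sound to voiced sound, and the speaking speed) can be aggregated roughly as in the sketch below. Frame-level pitch estimates and voicing decisions are assumed to come from an external pitch tracker, so the function only combines them; the function name and its inputs are illustrative, not an established API.

```python
import numpy as np

def acoustic_statistics(f0_per_frame, voiced_flags, word_count, duration_s):
    """Aggregate frame-level pitch-tracker output into the statistics used
    for emotion identification.

    f0_per_frame : fundamental-frequency estimate per frame (Hz)
    voiced_flags : boolean array, True where the frame is voiced
    word_count   : number of words in the spoken sentence
    duration_s   : duration of the voice signal in seconds
    """
    f0 = np.asarray(f0_per_frame, dtype=float)
    voiced = np.asarray(voiced_flags, dtype=bool)
    voiced_f0 = f0[voiced]

    # Average F0 = sum of F0 over voiced frames divided by the number of voiced frames.
    f0_mean = voiced_f0.sum() / max(len(voiced_f0), 1)
    f0_range = float(voiced_f0.max() - voiced_f0.min()) if len(voiced_f0) else 0.0
    f0_var = float(voiced_f0.var()) if len(voiced_f0) else 0.0

    # Time ratio between unvoiced and voiced segments, counted in frames.
    unvoiced_to_voiced = np.count_nonzero(~voiced) / max(np.count_nonzero(voiced), 1)

    # Speaking speed = word quantity divided by voice signal duration.
    speaking_speed = word_count / duration_s if duration_s > 0 else 0.0

    return {
        "f0_mean": f0_mean,
        "f0_range": f0_range,
        "f0_variance": f0_var,
        "unvoiced_to_voiced": unvoiced_to_voiced,
        "speaking_speed": speaking_speed,
    }
```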
In addition to determining an emotion of the child, and learning about a companion object of the child, the robot may further analyze the collected information to determine a current status of the child, and determine a current interaction scenario, for example, whether the child is currently playing alone or is currently accompanied by a parent. The robot may select, from a simulated object database based on the current interaction scenario, simulated object data used in current interaction, and simulate a corresponding companion object based on the simulated object data used in the current interaction, to interact with the child. For example, if the child currently says that the child misses his/her mother but his/her mother is absent, the robot may simulate the mother based on simulated object data that is generated through previous learning about the mother and that corresponds to the mother, to interact with the child. Alternatively, in a process in which the child interacts with the parent, when the child shows interest in specific knowledge or a specific phenomenon, the robot may select related simulated object data to simulate a corresponding companion object, to interact with the child. The server or the robot obtains, through analysis based on the received picture/video in the film, a name of the film watched by the child, and obtains, through analysis based on an action picture/video/voice of the child, whether the child likes a figure in the film, so that the server or the robot obtains the name of the film that the child is watching, a name of an idol of the child, and even a fragment of the idol of the child. For example, the robot obtains, through analysis, that the child is fond of watching “Frozen”, and likes the princess Elsa. The server queries idol information on the Internet based on the film name and idol name information, to model the idol based on the idol information, so that the robot can simulate the idol that interests the child. Data processing of an object simulated by the robot: An object that interests the child may be stored in a local database of the robot. For an object that does not interest the child, a positive thing that is suitable for an age of the child is selected, and is played or simulated to the child for watching. Images captured in different expressions of the child are operated in different manners. When the child shows expressions of delight, surprise, and the like, it indicates that the child is interested in current things, but the things are not necessarily suitable to the child. In this case, appropriate data needs to be selected for interaction with the child. When the child shows expressions of anger, disgust, and the like, it indicates that the child does not like current things, but the current things may be beneficial to growth of the child. In this case, the robot needs to interact with the child by using data of the things, to guide the growth of the child. For example, for a thing that interests the child, it is determined whether the thing is historically interested. If the thing is historically interested, the robot may directly search the local database for related data, selecting data that matches the age of the child, and then interact with the child. For example, when it is detected, from an image, that the child is reading “The Little Prince”, the robot searches the local database for data related to “The Little Prince”. 
If the robot can find content, it indicates that “The Little Prince” is historically interested, and the robot may directly play or simulate data (illustrations, animated videos, story voices, and the like of “The Little Prince”) in the local database to the child for watching. If a thing appears for a first time (there is no related information in the local database), the robot needs to determine impact exerted on the child by the thing, and selects positive information. A specific method may be: obtaining a material or an introduction of a thing through searching by using a network server, and determining a feature of the thing. For example, when it is detected, in an image, that the child is watching an animated film “Conan”, and the robot finds, by using the network server, that this film has some violent content that is not suitable for a child under 6 years old, the robot ignores the content. When it is detected, in an image, that the child is watching an animated film “Pleasant Goat and Big Big Wolf”, and the robot finds, by using the network server, that this film is suitable for a child under 5 years old, the robot downloads data related to “Pleasant Goat and Big Big Wolf” locally, to interact with the child at any time. The robot directly confirms with the parent whether the thing can be used to interact with the child. After getting approval from the parent, the robot can directly download related data from the network server to interact with the child. For a thing that the child dislikes, it is determined whether the thing is beneficial to growth of the child. A determining manner may be: confirming with the parent, or confirming by using the network server. A specific manner is similar to the foregoing step. When it is determined that the thing is beneficial to the growth of the child, the robot may gradually interact with the child. The robot may directly play or simulate the thing (an expression/audio/an action, or the like), and simultaneously detect a reaction of the child to the thing by using a camera. For data that the child likes (an expression of delight or the like), the robot stores related data in the local database. For data that the child dislikes (an expression of disgust or the like), if the data has been stored in the local database, the robot may directly delete the data from the local database, or may determine, after confirming with the parent, whether to delete the data; if the data has not been stored in the local database, the robot may directly not store the data, or may determine, after confirming with the parent, whether to store the data. An embodiment of the present invention further provides a service server, which may be a third-party cloud server, a child growth server, or a social public cloud server. The server includes a processor with processing and calculation capabilities and functions, to perform each method step or function for interaction with a robot in the foregoing solution. Referring toFIG.5, a server70includes a processor705, a signal transceiver702that communicates with another device, and a memory706that stores data, a program, and the like. The server70may further include various appropriate components, such as a display704and an input/output device (not shown). The various components are connected by using a bus707, and are controlled and managed by the processor. The server cooperates with the robot, sorts out simulated object data for the robot, and stores a simulated object database. 
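The content-selection logic described in the preceding paragraphs (look the thing up in the robot's local database first, otherwise check age suitability through the network server and confirm with the parent before downloading data) can be summarized in the following minimal sketch. The helper objects `local_db`, `network_server` and `parent` and their methods are hypothetical stand-ins, not an actual API.

```python
def select_interaction_data(thing, child_age, local_db, network_server, parent):
    """Decide what data, if any, the robot should use to interact with the
    child about `thing`. All collaborators are hypothetical interfaces."""
    # Historically interesting things are already in the local database.
    cached = local_db.find(thing)
    if cached is not None:
        return [item for item in cached if item.get("min_age", 0) <= child_age]

    # A thing seen for the first time: check whether it suits the child's age.
    info = network_server.lookup(thing)
    if info is None:
        return None
    if info.get("min_age", 0) > child_age:
        return None                      # e.g. content unsuitable for the age is ignored

    # Optionally confirm with the parent before using the new content.
    if not parent.approve(thing):
        return None

    data = network_server.download(thing)
    local_db.store(thing, data)          # keep it locally for future interactions
    return data
```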
The signal transceiver702receives sensing information of a companion object of a target object and emotion information of the target object that is obtained when the target object interacts with the companion object. The sensing information and the emotion information are sent by the robot. As mentioned above, the sensing information includes at least one of view information and voice information. The signal transceiver702sends, to the robot, the simulated object data generated by the processor. The processor705extracts an emotion feature quantity from the emotion information, determines, based on the emotion feature quantity, an emotional pattern used by the target object to interact with the companion object, determines, based on the emotional pattern, a degree of interest of the target object in the companion object, extracts behavioral data of the companion object from the sensing information based on the degree of interest, and screens the behavioral data to obtain the simulated object data. As mentioned above, the simulated object data is used by the robot to simulate the companion object, and the virtual simulated object is used to describe the companion object. The memory on the server is configured to store the simulated object database to record the simulated object data. In one embodiment, a parent has a data terminal, and can directly create a simulation constraint condition on the data terminal. After obtaining data, the robot or the server matches the data with the simulation constraint condition, and generates the simulated object data by using behavioral data that meets the simulation constraint condition. Alternatively, the parent directly instructs a behavior of the robot by using the data terminal or the server. The data terminal may be a remote control device that matches the robot, or may be an intelligent terminal on which an associated application is installed. A selection instruction sent by the data terminal can be received by using a transceiver of the robot or the signal transceiver of the server. When the functions are implemented in the form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present invention essentially, or the part contributing to the prior art, or some of the technical solutions may be implemented in a form of a software product. The software product is stored in a storage medium, and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform all or some of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes: any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc. | 29,725 |
11858119 | DESCRIPTION OF EMBODIMENTS Various embodiments will be described more fully hereinafter with reference to the accompanying drawings, in which some embodiments are shown. The invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this description will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. In the drawings, the sizes and relative sizes of layers and regions may be exaggerated for clarity. It will be understood that when an element or layer is referred to as being “on,” “connected to” or “coupled to” another element or layer, it can be directly on, connected or coupled to the other element or layer or intervening elements or layers may be present. In contrast, when an element is referred to as being “directly on,” “directly connected to” or “directly coupled to” another element or layer, there are no intervening elements or layers present. Like numerals refer to like elements throughout. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. It will be understood that, although the terms first, second, third etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer or section from another region, layer or section. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, layer or section without departing from the teachings of the invention. Spatially relative terms, such as “beneath,” “below,” “lower,” “above,” “upper” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as “below” or “beneath” other elements or features would then be oriented “above” the other elements or features. Thus, the exemplary term “below” can encompass both an orientation of above and below. The device may be otherwise oriented (for example, rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a,” “an” and “the” are intended to include a plurality of forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Embodiments are described herein with reference to cross-sectional illustrations that are schematic illustrations of idealized embodiments (and intermediate structures). 
As such, variations from the shapes of the illustrations as a result, for example, of manufacturing techniques and/or tolerances, are to be expected. Thus, embodiments should not be construed as limited to the particular shapes of regions illustrated herein but are to include deviations in shapes that result, for example, from manufacturing. For example, an implanted region illustrated as a rectangle will, typically, have rounded or curved features and/or a gradient of implant concentration at its edges rather than a binary change from implanted to non-implanted region. Likewise, a buried region formed by implantation may result in some implantation in the region between the buried region and the face through which the implantation takes place. Thus, the regions illustrated in the figures are schematic in nature and their shapes are not intended to illustrate the actual shape of a region of a device and are not intended to limit the scope of the invention. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein. Hereinafter, example embodiments of the invention will be described in detail with reference to the accompanying drawings. Like elements or components can be indicated by like reference numerals throughout the drawings, and the repeated explanations of like elements or components may be omitted. FIG.1is a schematic plan view illustrating a moving device in accordance with example embodiments of the invention.FIG.2is a schematic cross sectional view illustrating a moving device in accordance with example embodiments of the invention. Referring toFIGS.1and2, a moving device100according to example embodiments may include a traveling way11, a moving member13, at least one absorbing member15, at least one load supporting member17, a driving member19and a control member21. The traveling way11may provide a predetermined moving path along which the moving member13can run. The traveling way11may include, for example, a gantry for an apparatus for supplying chemical liquid, a traveling rail for an overhead hoist transport (OHT), etc. The moving member13may run over the traveling way11. Particularly, the moving member13may move in a longitudinal direction of the traveling way11where the traveling way11may extend. For example, the traveling way11may have a shape of a substantially straight line. Alternatively, the traveling way11may include straight portions and curved portions as desired. The moving member13may have a structure which may substantially enclose an upper face and sides of the traveling way11such that the moving member13may more stably run over the traveling way11. As illustrated inFIG.2, the moving member13may substantially enclose the upper face of the traveling way11, a first side of the traveling way11and a second side of the traveling way11substantially opposed to the first side thereof. In this case, the moving member13may entirely cover the upper face of the traveling way11and the first side of the traveling way11whereas the moving member13may partially cover the second side of the traveling way11. 
In example embodiments, the moving device100may include a plurality of absorbing members15and a plurality of load supporting members17. For example, the moving device100may include, as illustrated inFIG.1, two absorbing members15and four load supporting members17. In this case, two load supporting members17may be disposed between the first side of the traveling way11and a first inner side of the moving member13, and the two absorbing members15may be disposed between the second side of the traveling way11and a second inner side of the moving member13. The two absorbing members15may substantially correspond to the two load supporting members17, respectively. Additionally, two load supporting members17may be disposed between the upper face of the traveling way11and a bottom face of the moving member13. The load supporting members17and the absorbing members15may be separated by a substantially identical interval. However, the number of the absorbing members15and/or the number of the load supporting members17may vary in accordance with the dimensions of the moving device100, the dimensions of the apparatus for supplying chemical liquid including the moving device100, the dimensions of the OHT utilizing the moving device100, etc. When the moving device100includes the plurality of absorbing members15and the plurality of load supporting members17which may be arranged in the above-described configuration, the moving member13may stably run over the traveling way11along the moving path provided by the traveling way11while the moving member13may be separated from the traveling way11by a substantially constant distance, as described below. Further, the damages to the traveling way11and/or the moving member13may be effectively prevented by the absorbing members15and the load supporting members17. If the moving member13is tilted toward the right or the left while the moving member13runs over the traveling way11, the moving member13may deviate from the moving path provided by the traveling way11. In other words, the moving member13may fall off the traveling way11when the moving member13is inclined to the right or to the left over the traveling way11. To prevent this, the load supporting members17may enable the moving member13to be separated from the traveling way11by the substantially constant distance while the moving member13moves over the traveling way11. Particularly, the load supporting members17may support the loads applied to the moving member13and the traveling way11. As such, the load supporting members17of the moving device100may support the loads applied to the moving member13and the traveling way11so that the moving member13may more stably run along the moving path while maintaining the substantially constant distance between the moving member13and the traveling way11. When the moving member13moves over the traveling way11along the moving path, the moving member13may not maintain the constant distance relative to the traveling way11because of various factors including, but not limited to, the variation of the loads applied to the moving member13and the conditions of the traveling way11, and thus the moving member13may deviate from the moving path. That is, the moving member13may fall off the traveling way11. Therefore, the moving member13and/or the traveling way11may be damaged by an impulsive load or an impulsive force applied to the traveling way11and the moving member13if the moving member13falls off the traveling way11. 
Considering the above problems, the moving device100may include the absorbing members15so as to prevent the moving member13from deviating from the moving path (that is, so as to prevent the moving member13from falling off the traveling way11) due to the various factors including the variation of the loads applied to the moving member13and the conditions of the traveling way11while the moving member13runs over the traveling way11along the moving path. In other words, the absorbing members15may absorb the impulsive load or the impulsive force applied to the moving member13and the traveling way11such that the damages to the moving member13and/or the traveling way11may be effectively prevented. Hereinafter, the absorbing members15and the load supporting members17according to example embodiments of the invention will be described in detail. As for the moving device100according to example embodiments, the plurality of load supporting members17and the plurality of absorbing members15may be disposed between the moving member13and the traveling way11. The plurality of load supporting members17may maintain the constant distance between the moving member13and the traveling way11. Further, when the moving member13runs over the traveling way11, the plurality of load supporting members17and the plurality of absorbing members15may prevent the moving member13from being inclined to the right and/or to the left over the traveling way11. Accordingly, the moving member13may stably run over the traveling way11along the moving path. The absorbing members15may absorb the impulsive load or the impulsive force applied to the moving member13and the traveling way11when the moving member13moves over the traveling way11and when the moving member13deviates from the moving path. Therefore, the absorbing members15may prevent the traveling way11and/or the moving member13from being damaged by the impulsive load or the impulsive force. As described above, the moving device100may include the plurality of load supporting members17. For example, the moving device100may include the two load supporting members17disposed between the upper face of the traveling way11and the bottom face of the moving member13, and the two load supporting members17positioned between the first side of the traveling way11and the first inner side of the moving member13. The load supporting members17may be separated from each other by the substantially constant interval. Additionally, the moving device100may include the two absorbing members15disposed between the second side of the traveling way11and the second inner side of the moving member13. In this case, the plurality of load supporting members17may not overlap with the plurality of absorbing members15between the traveling way11and the moving member13. However, the number of the load supporting members17and the number of the absorbing members15may vary as occasion demands. According to example embodiments, the plurality of absorbing members15and/or the plurality of load supporting members17may include air bearings, respectively. Preferably, each of the absorbing members15may include an air bearing. When each of the load supporting members17includes the air bearing, the load supporting members17may support the load applied to the moving member13and the traveling way11using pneumatic pressures provided by the air bearings. 
With these load supporting members17, the moving member13may more stably run over the traveling way11along the moving path while maintaining the substantially constant interval between the moving member13and the traveling way11. FIG.3is a schematic cross sectional view illustrating the absorbing members15of the moving device100in accordance with example embodiments of the invention. Referring toFIG.2andFIG.3, each of the absorbing members15may include an air bearing31and may further include a damping member33which may be coupled to the air bearing31. For example, the damping member33may include a spring plate connected to the air bearing31. The air bearing31may provide an air gap35between the second side of the traveling way11and the air bearing31. Therefore, the damping member33and the air bearing31may efficiently absorb impact and/or vibration relative to the moving member13and the traveling way11. Each of the absorbing members15including the air bearing31and the damping member33may more effectively absorb the impulsive load or the impulsive force applied to the moving member13and the traveling way11when the moving member13falls off the traveling way11(that is, when the moving member13deviates from the moving path). As a result, the damages to the moving member13and/or the traveling way11may be effectively prevented by the absorbing members15including the air bearings31and the damping members33. Further, the damping member33and the air bearing31may absorb the impact to the moving member13and the traveling way11and/or vibrations generated between the moving member13and the traveling way11so that the moving member13may more stably move over the traveling way11. Moreover, the substantially constant distance may be maintained between the moving member13and the traveling way11by the air gaps35provided by the air bearings31of the absorbing members15. As such, the moving device100according to example embodiments may include the load supporting members17having the air bearings and the absorbing members15having the air bearings31and the damping members33, so that the moving device100may maintain the substantially constant distance between the moving member13and the traveling way11when the moving member13moves over the traveling way11, and also the moving device100may prevent the damages to the moving member13and/or the traveling way11when the moving member13falls off the traveling way11. Referring now toFIG.1, each of the absorbing members15disposed between the second side of the traveling way11and the second inner side of the moving member13may substantially correspond to each of the load supporting members17disposed between the first side of the traveling way11and the first inner side of the moving member13. In other words, the plurality of absorbing members15and the plurality of load supporting members17may be separated by the substantially identical interval. Further, each of the load supporting members17may be preferably disposed on a reference plane of movement when the moving member13runs along the moving path. Here, the reference plane of movement may be a plane providing a reference moving path relative to the moving path and may be located between the traveling way11and the moving member13. 
As described above, the moving device100of example embodiments may include the load supporting members17positioned between the first side of the traveling way11and the first inner side of the moving member13, and the load supporting members17located between the upper face of the traveling way11and the bottom face of the moving member13. In this case, the absorbing members15may be disposed between the second side of the traveling way11and the second inner side of the moving member13. Therefore, the load supporting members17and the absorbing members15may not substantially overlap between the traveling way11and the moving member13. The first side of the moving member13may entirely cover the first side of the traveling way11so that the first side of the moving member13may substantially fully cover the load supporting members17. Additionally, the second side of the moving member13may partially cover the second side of the traveling way11such that the second side of the moving member13may substantially cover the absorbing members15. Moreover, the moving member13may cover the load supporting members17disposed between the upper face of the traveling way11and the bottom face of the moving member13. Although not illustrated, the moving member13may have a space for receiving an object (for example, a substrate) which may be transferred along the traveling way11, or may have a member such as a hook or a ring on which the object may be hung. FIG.4andFIG.5illustrate the arrangements of the absorbing members15and the load supporting members17in accordance with example embodiments of the invention. Referring toFIG.4andFIG.5, the absorbing members15may be disposed between the second side of the traveling way11and the second inner side of the moving member13, and the load supporting members17may be positioned between the upper face of the traveling way11and the bottom face of the moving member13and between the first side of the traveling way11and the first inner side of the moving member13. Further, the first side of the moving member13may fully cover the first side of the traveling way11while the second side of the moving member13may partially cover the second side of the traveling way11. In example embodiments, the number of the load supporting members17disposed between the first side of the traveling way11and the first inner side of the moving member13, and between the upper face of the traveling way11and the bottom face of the moving member13, may vary. Additionally, the number of the absorbing members15disposed between the second side of the traveling way11and the second inner side of the moving member13may vary. As illustrated inFIG.4, the load supporting members17may be arranged in an arrangement of two columns and two rows between the first inner side of the moving member13and the first side of the traveling way11. Further, the load supporting members17may be arranged in an arrangement of one column and two rows between the bottom face of the moving member13and the upper face of the traveling way11. As illustrated inFIG.5, the absorbing members15may be arranged in an arrangement of one column and two rows between the second inner side of the moving member13and the second side of the traveling way11. In some example embodiments, the absorbing members15may be disposed between the first inner side of the moving member13and the first side of the traveling way11. 
In this case, the absorbing members15may be arranged in an arrangement of two columns and two rows between the first inner side of the moving member13and the first side of the traveling way11. Additionally, the load supporting members17may be disposed between the second inner side of the moving member13and the second side of the traveling way11. Here, the load supporting members17may be arranged in an arrangement of one column and two rows between the second inner side of the moving member13and the second side of the traveling way11. Moreover, two load supporting members17may be arranged in an arrangement of one column between the bottom face of the moving member13and the upper face of the traveling way11. As described above, the various numbers of the load supporting members17and the various numbers of absorbing members15may be arranged in various arrangements between the moving member13and the traveling way11considering various factors including the loads applied to the moving member13and the traveling way11and the conditions of the traveling way11. According to example embodiments, the moving device100may include the load supporting members17and the absorbing members15arranged in the above described arrangements so that the moving device100may more stably move over the traveling way11while maintaining the substantially constant distance between the moving member13and the traveling way11. Further, if the moving member13deviates from the moving path, the impulsive load or the impulsive force applied to the moving member13and/or the traveling way11may be absorbed such that the damages to the moving member13and the traveling way11may be prevented. Referring now toFIG.1, the traveling way11of the moving device100may generally have the shape of a substantially straight line, or alternatively the traveling way11may include the straight portions and the curved portions. The driving member19may apply the driving force to the moving member13such that the moving member13may run over the traveling way11. For example, the driving member19may include a motor such as a linear motor for moving the moving member13along the moving path. By using the driving member19, the moving member13may move over the traveling way11in one direction as well as in both directions. As noted above, the moving member13may deviate from the moving path while the moving member13runs over the traveling way11. Although the absorbing members15may absorb the impulsive load or the impulsive force applied to the moving member13and/or the traveling way11, the correction of the moving member13and/or the traveling way11may be required when the moving member13falls off the traveling way11. The control member21may adjust the driving force provided by the driving member19such that the moving member13and/or the traveling way11may be corrected if the moving member13deviates from the moving path. In example embodiments, the control member21and the driving member19may cooperatively operate so as to correct the moving member13and/or the traveling way11when the moving member13deviates from the moving path. Therefore, the moving device100including the moving member13, the driving member19and the control member21may stably transport the object even if the object has increased dimensions. Hereinafter, an apparatus for supplying chemical liquid including a moving device and a chemical liquid supply member in accordance with example embodiments of the invention will be described in detail. 
However, it can be understood that the invention is not limited to the apparatus for supplying chemical liquid and can be employed in other apparatuses such as an OHT. FIG.6is a schematic plan view illustrating an apparatus for supplying chemical liquid in accordance with example embodiments of the invention. Referring toFIG.6, an apparatus for supplying chemical liquid200in accordance with example embodiments may have a substantial cantilever structure. The apparatus for supplying chemical liquid200illustrated inFIG.6may include the moving device100described with reference toFIG.1toFIG.5as well as a chemical liquid supply member61. For example, the chemical liquid supply member61may include at least one ink jet head. The at least one ink jet head of the chemical liquid supply member61may include a plurality of nozzles for spraying chemical liquid onto a substrate. The plurality of nozzles may be arranged at constant intervals. Additionally, the at least one ink jet head may include a plurality of piezoelectric elements disposed adjacent to the plurality of nozzles, respectively. Here, the number of the piezoelectric elements may be substantially the same as the number of the nozzles. The chemical liquid may be provided onto the substrate by the operations of the piezoelectric elements. Particularly, the amounts of the chemical liquid sprayed onto the regions of the substrate from the nozzles may be independently adjusted by controlling voltages to the piezoelectric elements, respectively. In example embodiments, the apparatus for supplying chemical liquid200may additionally include a stage on which the substrate is placed and a transferring member for transferring the substrate. The stage may be a floating stage which may float the substrate from a surface thereof. When the apparatus for supplying chemical liquid200includes the floating stage, the apparatus for supplying chemical liquid200may further include an air supply member capable of spraying air onto a bottom face of the substrate and a vacuum suction member capable of providing an absorption force to the bottom face of the substrate. The transferring member may hold one side or both sides of the substrate floated over the stage, and then may transfer the substrate over the floating stage. In example embodiments, the transferring member may include a guide rail, a holding member and a driving member. The guide rail may be disposed adjacent to one side of the stage. Alternatively, two guide rails may be disposed adjacent to both sides of the stage, respectively. The holding member may hold one side or both sides of the substrate and may move along the guide rail. The driving member may provide a predetermined driving force to the holding member. The chemical liquid supply member61of the apparatus for supplying chemical liquid200may be provided as a package including a plurality of ink jet heads. For example, the chemical liquid supply member61may be provided as one module including three ink jet heads. Alternatively, the chemical liquid supply member61may be provided as a plurality of modules including a plurality of ink jet heads, respectively. Thus, the chemical liquid supply member61may have a relatively large volume and a relatively heavy weight. For this reason, the apparatus for supplying chemical liquid200may include the moving device100described with reference toFIG.1toFIG.5for the chemical liquid supply member61having relatively large dimensions. 
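As a loose illustration of the per-nozzle control mentioned above, the sketch below maps a target dispense amount for each nozzle to a drive voltage for its piezoelectric element through an assumed linear calibration. The calibration constants and the `head.set_voltage` interface are hypothetical and do not describe any real ink jet head API; an actual head would use its own calibrated drive waveforms.

```python
def set_nozzle_voltages(head, target_volumes_pl, volume_per_volt_pl=0.8, v_min=5.0, v_max=35.0):
    """Adjust each piezoelectric element so its nozzle dispenses roughly the
    requested chemical-liquid volume (in picolitres), assuming a linear
    droplet-volume vs. drive-voltage calibration. `head.set_voltage(index, volts)`
    is a hypothetical interface."""
    for nozzle_index, volume_pl in enumerate(target_volumes_pl):
        volts = volume_pl / volume_per_volt_pl
        volts = min(max(volts, v_min), v_max)   # clamp to the element's safe drive range
        head.set_voltage(nozzle_index, volts)
```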
As illustrated inFIG.6, the apparatus for supplying chemical liquid200may include a moving member13capable of supporting one side of the chemical liquid supply member61and a gantry63capable of providing a moving path for the moving member13. The gantry63may be disposed in a direction substantially perpendicular to a direction in which the substrate is transferred over the stage. For example, the substrate may be transferred along a first direction and the gantry63may be arranged in a second direction substantially perpendicular to the first direction. Thus, the moving member13may run along the gantry63in the second direction while the substrate is transferred in the first direction. The chemical liquid may be provided onto the substrate from the chemical liquid supply member61supported by the moving member13. Since the apparatus for supplying chemical liquid200may include the load supporting members17and the absorbing members15described with reference toFIG.1toFIG.5, the moving member13may stably move over the gantry63while maintaining a substantially constant distance between the moving member13and the gantry63. If the moving member13deviates from a moving path provided by the gantry63, the absorbing members15may absorb an impulsive load or an impulsive force applied to the moving member13and/or the gantry63so that the damages to the moving member13and the gantry63may be prevented. In this case, the load supporting members17may be disposed between a first side of the gantry63and a first inner side of the moving member13and between an upper face of the gantry63and the bottom face of the moving member13. Additionally, the absorbing members15may be disposed between a second side of the gantry63and a second inner side of the moving member13. In example embodiments, when the moving member13supports the chemical liquid supply member61having the increased dimensions over the gantry63, the moving member13may stably run over the gantry63by the load supporting members17even if a considerably increased load is applied to the moving member13and the gantry63from the chemical liquid supply member61. Further, a considerably increased impulsive load or a considerably increased impulsive force applied to the moving member13and/or the gantry63may be effectively absorbed by the absorbing members15if the moving member13supporting the chemical liquid supply member61falls off the gantry63. Therefore, the chemical liquid may be exactly discharged onto desired regions of the substrate from the chemical liquid supply member61moved together with the moving member13. In addition, the apparatus for supplying chemical liquid200may include the driving member19and the control member21illustrated inFIG.1such that the driving member19and the control member21may correct the moving member13and/or the gantry63if the moving member13deviates from the moving path provided by the gantry63. According to example embodiments, the load supporting members17may enable the moving member13supporting the chemical liquid supply member61to stably run along the moving path while the substantially constant distance between the gantry63and the moving member13is maintained. Further, the absorbing members15may effectively absorb the impulsive load or the impulsive force applied to the moving member13and/or the gantry63if the moving member13falls off the gantry63so that the damages to the moving member13and the gantry63may be efficiently prevented.
Moreover, the control member21and the driving member19may correct the moving member13and/or the gantry63if the moving member13deviates from the moving path. Therefore, the chemical liquid may be stably supplied onto the substrate from the chemical liquid supply member61while ensuring the stable movement of the moving member13and effectively preventing the damages to the moving member13and the gantry63. FIG.7is a schematic plan view illustrating an apparatus for supplying chemical liquid in accordance with some example embodiments of the invention.FIG.8is a schematic plan view illustrating an apparatus for supplying chemical liquid in accordance with other example embodiments of the invention.FIG.9is a schematic plan view illustrating an apparatus for supplying chemical liquid in accordance with still other example embodiments of the invention. Each of the apparatuses for supplying chemical liquid illustrated inFIG.7toFIG.9may have a dual structure including a first moving device and a second moving device. Here, each of the first moving device and the second moving device may be substantially the same as the moving device100described with reference toFIG.1toFIG.5. InFIG.7toFIG.9, each of the apparatuses for supplying chemical liquid300may have a configuration substantially the same as that of the apparatus for supplying chemical liquid200described with reference toFIG.6except for a first gantry75, a second gantry77, a first moving member71and a second moving member73. More particularly, each of the apparatuses for supplying chemical liquid illustrated inFIG.7toFIG.9may include the chemical liquid supply member61, the stage and the transferring member described with reference toFIG.6in addition to the first moving device and the second moving device. Referring toFIG.7, the apparatus for supplying chemical liquid300may include the first gantry75, the second gantry77substantially facing the first gantry75and the chemical liquid supply member61for providing chemical liquid onto a substrate. The first moving member71of the first moving device may support one side of the chemical liquid supply member61and the second moving member73of the second moving device may support another side of the chemical liquid supply member61. The first moving member71may run over the first gantry75and the second moving member73may run over the second gantry77. That is, the first moving member71may move along a first moving path provided by the first gantry75and the second moving member73may move along a second moving path provided by the second gantry77. The first moving device may include the absorbing members15and the load supporting members17whereas the second moving device may include the load supporting members17. Alternatively, the second moving device may additionally include the absorbing members15. The absorbing members15of the first and second moving devices may absorb impulsive loads or impulsive forces applied to the first moving member71and/or the first gantry75and to the second moving member73and/or the second gantry77if the first moving member71deviates from the first moving path and/or the second moving member73deviates from the second moving path.
The load supporting members17of the first and second moving devices may support loads applied to the first moving member71and the second moving member73such that the first moving member71and the second moving member73may stably move over the first gantry75and the second gantry77, respectively, while maintaining a substantially constant distance between the first moving member71and the first gantry75and maintaining a substantially constant distance between the second moving member73and the second gantry77. In some example embodiments, the load supporting members17of the first moving device may be disposed between a first side of the first gantry75and a first inner side of the first moving member71and between an upper face of the first gantry75and a bottom face of the first moving member71. The absorbing members15of the first moving device may be disposed between a second side of the first gantry75and a second inner side of the first moving member71. The load supporting members17of the second moving device may be disposed between a first side of the second gantry77and a first inner side of the second moving member73and between an upper face of the second gantry77and a bottom face of the second moving member73. The absorbing members15of the second moving device may be disposed between a second side of the second gantry77and a second inner side of the second moving member73. In other words, the load supporting members17and the absorbing members15of the first and second moving devices may be disposed among the first gantry75, the first moving member71, the second gantry77and the second moving member73such that the load supporting members17and the absorbing members15do not overlap each other. In each of the apparatuses for supplying chemical liquid300illustrated inFIG.7toFIG.9, the first moving member71and the second moving member73may independently operate, and thus a positional error may be generated between one side and another side of the chemical liquid supply member61supported by the first moving member71and the second moving member73. As a result, the first moving member71may deviate from the first moving path or the second moving member73may deviate from the second moving path. To address this, each of the apparatuses for supplying chemical liquid300may include the load supporting members17so that the first moving member71may more stably run over the first gantry75and the second moving member73may more stably run over the second gantry77while maintaining the substantially constant distance between the first moving member71and the first gantry75and maintaining the substantially constant distance between the second moving member73and the second gantry77. Further, each of the apparatuses for supplying chemical liquid300may include the absorbing members15such that the absorbing members15may efficiently absorb the impulsive load or the impulsive force applied to the first moving member71and/or the first gantry75and the impulsive load or the impulsive force applied to the second moving member73and/or the second gantry77if the first moving member71falls off the first gantry75or the second moving member73falls off the second gantry77. Therefore, the damages to the first moving member71, the first gantry75, the second moving member73and the second gantry77may be effectively prevented.
In each of the apparatuses for supplying chemical liquid300illustrated inFIG.7toFIG.9, the first moving device may additionally include a first driving member and a first control member and the second moving device may additionally include a second driving member and a second control member. The first driving member may provide a first driving force to the first moving member71so that the first moving member71may run over the first gantry75. The second driving member may provide a second driving force to the second moving member73such that the second moving member73may run over the second gantry77. The first control member may correct the first moving member71by adjusting the first driving force of the first driving member. The second control member may correct the second moving member73by adjusting the second driving force of the second driving member. In other words, the first and second control members may cooperate with the first and second driving members to correct the first and second moving members71and73, respectively. The first and second driving members and the first and second control members may be substantially the same as the driving member19and the control member21described with reference toFIG.1. In some example embodiments, the load supporting members17may maintain the substantially constant distances between the first moving member71and the first gantry75and between the second moving member73and the second gantry77such that the load supporting members17may enable the first and second moving members71and73to stably run over the first and second gantries75and77, respectively. In addition, the absorbing members15may absorb the impulsive loads or the impulsive forces applied to the first moving member71and/or the first gantry75and the second moving member73and/or the second gantry77when the first and second moving members71and73deviate from the first and second moving paths, and thus the absorbing members15may effectively prevent the first and second moving members71and73and the first and the second gantries75and77from being damaged. In this case, the first and second driving members and the first and second control members may correct the first moving member71and/or the first gantry75and the second moving member73and/or the second gantry77. In other example embodiments, each of the first moving member71and the second moving member73may include a member such as a ring or a hook for easily moving the chemical liquid supply member61along the first and second gantries75and77. Referring now toFIG.7, the apparatus for supplying chemical liquid300of the dual structure may include the load supporting members17, the absorbing members15, the chemical liquid supply member61, the first moving member71, the second moving member73, the first gantry75and the second gantry77. The load supporting members17of the first and second moving devices may be disposed between one side of the first gantry75and one side of the first moving member71, between an upper face of the first gantry75and a bottom face of the first moving member71, between one side of the second gantry77and one side of the second moving member73, and between an upper face of the second gantry77and a bottom face of the second moving member73. Further, the absorbing members15of the first and second moving devices may be disposed between another side of the first gantry75and another side of the first moving member71and between another side of the second gantry77and another side of the second moving member73.
Here, the absorbing members15and the load supporting members17may be arranged such that they do not overlap each other. In some example embodiments, the load supporting members17may not be disposed between one side of the second moving member73and one side of the second gantry77. If no members are interposed between one side of the second moving member73and one side of the second gantry77, the second moving member73may ensure an improved degree of freedom of movement so that the second moving member73may stably move the chemical liquid supply member61. Similarly, the absorbing members15may not be disposed between another side of the second moving member73and another side of the second gantry77, and thus the second moving member73may ensure a more improved degree of freedom of movement. As a result, the chemical liquid supply member61may be more stably moved by the first moving member71and the second moving member73such that the chemical liquid may be exactly provided onto desired regions of the substrate from such a chemical liquid supply member61. In this case, one side of the second moving member73may fully cover one side of the second gantry77while another side of the second moving member73may partially cover another side of the second gantry77. In the apparatus for supplying chemical liquid300illustrated inFIG.7, the first and second moving members71and73may stably move over the first and the second gantries75and77, and also the damages to the first and second moving members71and73and the first and the second gantries75and77may be prevented by the absorbing members15and the load supporting members17of the first and second moving devices. Further, by the first and second driving members and the first and second control members of the first and second moving devices, the first and second moving members71and73may move along the first and second moving paths, and also the corrections of the first and second moving members71and73and/or the first and the second gantries75and77may be performed. Referring now toFIG.8, the apparatus for supplying chemical liquid300of the dual structure may include the absorbing members15, the load supporting members17, the chemical liquid supply member61, the first moving member71, the second moving member73, the first gantry75and the second gantry77. In other example embodiments, the load supporting members17may be disposed between an upper face of the first gantry75and a bottom face of the first moving member71, between an upper face of the second gantry77and a bottom face of the second moving member73, and between one side of the first gantry75and one side of the first moving member71. In this case, the absorbing members15may be disposed between another side of the first gantry75and another side of the first moving member71, between one side of the second gantry77and one side of the second moving member73, and between another side of the second gantry77and another side of the second moving member73. In the apparatus for supplying chemical liquid300of the dual structure illustrated inFIG.8, the load supporting members17may be adjacent to the upper face of the first gantry75, the upper face of the second gantry77and one side of the first gantry75. Additionally, the absorbing members15may be adjacent to another side of the first gantry75, one side of the second gantry77and another side of the second gantry77.
If the absorbing members15exist between one side of the first gantry75and one side of the first moving member71, the stability of movement of the first moving member71and/or the second moving member73may be reduced due to the damping provided by the absorbing members15. In some example embodiments, the absorbing members15and the load supporting members17may enable the first and second moving members71and73to stably run over the first and the second gantries75and77, and also may prevent the damages to the first and second moving members71and73and the first and the second gantries75and77. Additionally, the first and second driving members and the first and second control members may enable the first and second moving members71and73to move along the first and second moving paths, and also may perform the corrections of the first and second moving members71and73and/or the first and the second gantries75and77. Referring now toFIG.9, the apparatus for supplying chemical liquid300of the dual structure may include the absorbing members15, the load supporting members17, the chemical liquid supply member61, the first moving member71, the second moving member73, the first gantry75and the second gantry77. In other example embodiments, the load supporting members17may be disposed between an upper face of the first gantry75and a bottom face of the first moving member71, between an upper face of the second gantry77and a bottom face of the second moving member73, between one side of the first gantry75and one side of the first moving member71, and between one side of the second gantry77and one side of the second moving member73. In this case, the absorbing members15may be disposed between another side of the first gantry75and another side of the first moving member71, and between another side of the second gantry77and another side of the second moving member73. In other example embodiments, the absorbing members15and the load supporting members17may enable the first and second moving members71and73to stably move over the first and the second gantries75and77, and may prevent the damages to the first and second moving members71and73and the first and the second gantries75and77. Further, the first and second driving members and the first and second control members may enable the first and second moving members71and73to run along the first and second moving paths, and also may perform the corrections of the first and second moving members71and73and/or the first and the second gantries75and77. As for the apparatuses for supplying chemical liquid300illustrated inFIG.7toFIG.9, the arrangement of the absorbing members15and/or the arrangement of the load supporting members17may be suitably adjusted as occasions demand considering the chemical liquid supply member61, the first and second moving members71and73, and the first and second gantries75and77. According to example embodiments of the invention, the apparatus for supplying chemical liquid including the moving device may easily move a substrate having relatively large dimensions and may exactly provide the chemical liquid onto the desired regions of the substrate from the chemical liquid supply member. The foregoing is illustrative of embodiments and is not to be construed as limiting thereof. Although a few embodiments have been described, those skilled in the art will readily appreciate that many modifications are possible in the embodiments without materially departing from the novel teachings and advantages of the invention. 
Accordingly, all such modifications are intended to be included within the scope of the invention as defined in the claims. In the claims, means-plus-function clauses are intended to cover the structures described herein as performing the recited function and not only structural equivalents but also equivalent structures. Therefore, it is to be understood that the foregoing is illustrative of various embodiments and is not to be construed as limited to the specific embodiments disclosed, and that modifications to the disclosed embodiments, as well as other embodiments, are intended to be included within the scope of the appended claims. | 48,698 |
11858120 | DETAILED DESCRIPTION Hereinafter, a force measuring sensor and a robot in some forms of the present disclosure will be described with reference to the drawings. Force Measuring Sensor FIG.1is a top plan view illustrating a force measuring sensor in some forms of the present disclosure, andFIG.2is a cross-sectional view illustrating a signal generating part of the force measuring sensor and a layered structure of a PCB in some forms of the present disclosure. As illustrated inFIGS.1and2, a force measuring sensor10according to the present disclosure may include a wire100, a signal generating part200having one side fixed to one end portion of the wire100, and a signal processing part300configured to convert and process an analog signal received from the signal generating part200into a digital signal. In this case, the digital signals converted by the signal processing part300may be 16-bit data, but the type of data is not limited thereto. Tension of the wire100may be changed by power provided from a power source such as an actuator. For example, as described below, the wire100may be a tendon provided in a robot arm in order to operate the robot arm. Continuing to refer toFIG.2, an internal space S may be formed in the signal generating part200, and the wire100may be provided to penetrate the internal space S. As described above, the wire100may be configured such that the tension of the wire100is changed by the power provided from the power source such as the actuator. When the tension of the wire100is changed, a force applied to the signal generating part200to which the wire100is fixed is also changed. For example, as illustrated inFIG.2, in a case in which the wire100is fixed to a lower surface of a component220of the signal generating part200, a magnitude of a force applied downward to the signal generating part200by the wire100is also changed. The force measuring sensor10according to the present disclosure may be configured to generate the analog signal in response to the change in tension of the wire100. In more detail, according to the present disclosure, the analog signal may be generated by a change in thickness of the component of the signal generating part200caused by the change in tension of the wire100. That is, according to the present disclosure, i) the change in tension of the wire100, ii) the change in force applied to the signal generating part200by the wire100, iii) the change in thickness of the component of the signal generating part200, and iv) the generation of the analog signal may sequentially occur in a time series manner. In particular, according to the present disclosure, since the wire100penetrates the internal space S and is fixed directly to the signal generating part200without a separate component, the force measuring sensor10may be manufactured without a separate fixing member, such that the structure of the sensor may be simplified, and the miniaturization of the sensor may be implemented. Continuing to refer toFIG.2, the force measuring sensor10according to the present disclosure may further include a PCB400provided under the signal generating part200and the signal processing part300and configured such that the signal generating part200and the signal processing part300are in close contact with the PCB400. For example, the signal generating part200and the signal processing part300may be in close contact with an upper surface of the PCB400. In this case, the wire100may be provided to penetrate the PCB400. 
Therefore, the wire100may penetrate the internal space S of the signal generating part200via the PCB400, and then may be fixed to the lower surface of the component220of the signal generating part200. Meanwhile, the signal generating part200of the force measuring sensor10according to the present disclosure may have a structure in which a plurality of components is laminated. In more detail, the signal generating part200may include an electrode210provided on an upper portion of the PCB400and provided to be in close contact with the PCB400, a plate220provided to be spaced apart upward from the electrode210, and a dielectric layer230provided between the electrode210and the plate220and provided to be in close contact with the electrode210and the plate220. According to the present disclosure, an assembly of the electrode210, the plate220, and the dielectric layer230may function as a kind of capacitor. That is, the electrode210and the plate220may be charged with electric charges with the dielectric layer230interposed therebetween. Hereinafter, the quantity of charged electric charges is referred to as a charge quantity. Meanwhile, the charge quantity Q of the capacitor is not only proportional to a potential difference between polar plates, that is, a potential difference V between the electrode210and the plate220, but also proportional to an electrostatic capacity C of the capacitor. In addition, the electrostatic capacity is proportional to an area of the polar plate, that is, an area of the electrode210and an area of the plate220, whereas the electrostatic capacity is inversely proportional to an interval between the polar plates, that is, an interval between the electrode210and the plate. In this case, it can be seen that the interval between the electrode210and the plate220corresponds to a thickness of the dielectric layer230. The analog signal generated by the signal generating part200of the force measuring sensor10in some forms of the present disclosure may be generated by a change in thickness of the dielectric layer230. In more detail, in some forms of the present disclosure, the analog signal may be generated by a change in electrostatic capacity of the signal generating part200caused by the change in thickness of the dielectric layer230. That is, in some forms of the present disclosure, the wire100may be fixed to a lower surface of the plate220. Therefore, when the tension of the wire100is changed, a pressing force applied to the dielectric layer230by the plate220is changed, and the thickness of the dielectric layer230is changed by the change in pressing force. The analog signal generated by the change in electrostatic capacity caused by the change in thickness of the dielectric layer230may be transmitted to the signal processing part300. Continuing to refer toFIG.2, the signal generating part200of the force measuring sensor10according to the present disclosure may further include a shield part240provided between the electrode210and the PCB400. The shield part240may be configured to prevent a phenomenon in which electric charges are inadvertently stored in regions other than the capacitor configured by the electrode210, the plate220, and the dielectric layer230. For example, the shield part240may be an AC shield for preventing the occurrence of parasitic capacitance. Meanwhile, in some forms of the present disclosure, the electrode210and the shield part240may be inserted into the PCB400. 
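The proportionalities described above for the electrode210, the plate220, and the dielectric layer230can be summarized with the standard parallel-plate relations. This is offered only as an illustrative approximation; the symbols below (permittivity, plate area, dielectric thickness) are generic and are not reference characters from the drawings.

```latex
% Parallel-plate approximation of the electrode/plate/dielectric stack.
% Q: stored charge, V: potential difference between the electrode and the plate,
% C: electrostatic capacity, \varepsilon: permittivity of the dielectric layer,
% A: effective plate area, d: thickness of the dielectric layer.
\begin{align}
  Q &= C\,V, \qquad C = \frac{\varepsilon A}{d},\\
  \Delta C &\approx -\frac{\varepsilon A}{d^{2}}\,\Delta d
  \quad\text{for a small change in thickness } \Delta d .
\end{align}
```

Under this approximation, an increase in wire tension that presses the plate toward the electrode reduces the thickness of the dielectric layer and therefore increases the electrostatic capacity, which is the change that the signal processing part digitizes.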
In this case, since the shield part240is provided under the electrode210as described above, only an upper surface of the electrode210, between the electrode210and the shield part240, may be exposed to the outside. For example,FIG.2illustrates a state in which the upper surface of the electrode210and the upper surface of the PCB400are provided on the same plane. In addition, as illustrated inFIG.2, according to the present disclosure, through holes may be formed in the PCB400, the shield part240, the electrode210, and the dielectric layer230, respectively, and the through holes, which are formed in the PCB400, the shield part240, the electrode210, and the dielectric layer230may communicate with one another to form the internal space S. In contrast, the plate220may have no through hole. This is to prevent the wire100from being exposed to the outside. Meanwhile, in some forms of the present disclosure, the dielectric layer230may include a conductive filler and resin. Therefore, the thickness of the dielectric layer230according to the present disclosure may be comparatively greatly changed by external force. That is, according to the present disclosure, the change in thickness of the dielectric layer230in accordance with the change in tension of the wire100may be maximized, and the magnitude of the analog signal may also be increased, such that sensitivity of the force measuring sensor10may also be significantly improved. FIG.3is a top plan view illustrating a force measuring sensor in some forms of the present disclosure. As illustrated inFIG.3, the force measuring sensor10may have a plurality of signal generating parts200. For example,FIG.3illustrates a state in which the force measuring sensor10has six signal generating parts200a,200b,200c,200d,200e, and200f. As illustrated inFIG.3, in the case in which the plurality of signal generating parts is provided, a plurality of wires is also provided, and as a result, it is possible to measure changes in tension of the plurality of wires provided in the single force measuring sensor10. Meanwhile, according to another example of the present disclosure, the plurality of signal generating parts200a,200b,200c,200d,200e, and200fand the signal processing part300may be mounted on the single PCB400. Robot FIGS.4and5are perspective views illustrating exemplary structures of robot arms of robots in which the force measuring sensor in some forms of the present disclosure is mounted. Referring toFIGS.1to5, a robot in some forms of the present disclosure may include the force measuring sensor10. In this case, the force measuring sensor10according to the present disclosure may include the wire100, the signal generating part200having one side fixed to one end portion of the wire100, and the signal processing part300configured to process an analog signal received from the signal generating part200into a digital signal. In addition, the wire100may be provided to penetrate the internal space S formed in the signal generating part200. The analog signal may be generated by a change in thickness of the component of the signal generating part200caused by the change in tension of the wire100. The component may be the dielectric layer230as described above. Meanwhile, the above-mentioned description of the force measuring sensor10according to the present disclosure may also be equally applied to the robot according to the present disclosure. 
In more detail, the robot in some forms of the present disclosure may include a robot arm1, and the force measuring sensor10may be provided on an end portion of the robot arm1. The wire100provided in the force measuring sensor10of the robot arm1in some forms of the present disclosure may be a configuration for imitating a tendon provided inside a human arm. Therefore, when the wire100is pulled by the power source such as the actuator, the tension of the wire100is increased, and thus a linkage structure of the robot arm1is moved. In this case, in order to precisely control the robot arm1, it is necessary to precisely measure the tension of the wire100. In some forms of the present disclosure, it is possible to precisely measure the tension of the wire100on the basis of the analog signal generated in response to the change in thickness of the dielectric layer230provided in the force measuring sensor10. Therefore, in some forms of the present disclosure, the precise control for the robot arm1may be implemented. For example, the robot arm1of the robot in some forms of the present disclosure may be a robot arm provided on a surgical robot. However, the type of robot is not limited thereto. The present disclosure has been described with reference to the limited forms and the drawings, but the present disclosure is not limited thereto. The described forms may be carried out in various forms by those skilled in the art to which the present disclosure pertains within the technical spirit of the present disclosure and within the scope equivalent to the appended claims. | 11,990 |
11858121 | DETAILED DESCRIPTION In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the disclosure. It will be appreciated, however, by those of ordinary skill in the art, that the disclosed techniques may be practiced without these specific details or with an equivalent arrangement. To mitigate the problems described herein, the inventors had to both invent solutions and, in some cases just as importantly, recognize problems overlooked (or not yet foreseen) by others in the fields of robotics, and computer vision, electrical engineering, and machine learning. Indeed, the inventors wish to emphasize the difficulty of recognizing those problems that are nascent and will become much more apparent in the future should trends in industry continue as the inventors expect. Further, because multiple problems are addressed, it should be understood that some embodiments are problem-specific, and not all embodiments address every problem with traditional systems described herein or provide every benefit described herein. That said, improvements that solve various permutations of these problems are described below. Many robots are not well suited to detect leaks at gas processing facilities. They often lack the appropriate sensor suite, as many robot-mounted cameras cannot detect gas that, in the visual spectrum, is often transparent. Further, such robots often generate excessive heat that impairs the operation of sensors suitable for detecting leaks, and many robotic imaging systems capture large amounts of video data without regard to whether those videos depict the types of features of interest in gas leak detection, leaving users with large volumes of video data to wade through after the fact. None of which is to suggest that any techniques are disclaimed. To mitigate some or all of these problems, in some embodiments, a robot may use a sensor system to inspect for gas leaks. The sensor system may obtain path information (e.g., inspection path information) that indicates a path for the robot to travel. The path information may indicate locations along the path to inspect with the sensor system, or in some cases, the robot may be configured to detect potential leaks in the field and determine to inspect at a location on the fly. Each location may be associated with information indicating a particular view of the location that is to be captured via one or more robot-mounted cameras. For example, the path information may indicate the location of a pipe fitting (e.g., a first location) and a tank (e.g., a second location) within a facility that is to be inspected using the sensor system. The information may include orientation information indicating a pose (e.g., position and orientation) that a red green blue (RGB) camera and an OGI camera should be placed in to record an image (e.g., or video) of each location. The information may indicate a distance (e.g., a minimum distance, target distance, or a maximum distance) from an object to move to record an image. The robot may move along the path or move to a starting location of the path, for example, in response to obtaining the path information. The robot may move autonomously. For example, after receiving input to move along a path, the robot may move to each location on the path without further input from a user. The robot may move with assistance from a teleoperator. 
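As a concrete illustration of the path information described above (inspection locations, a camera pose for each location, and a target standoff distance), the following minimal sketch shows one way such records could be organized. The field names and values are hypothetical and are not taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Waypoint:
    """One inspection location from the path information (hypothetical fields)."""
    name: str                                    # e.g., "pipe fitting" or "storage tank"
    position_m: tuple[float, float, float]       # location in the facility frame (meters)
    camera_rpy_rad: tuple[float, float, float]   # roll, pitch, yaw for the OGI/RGB cameras
    standoff_m: float                            # target distance from the object when recording
    record_seconds: float = 10.0                 # how long to record OGI video at this waypoint

# Hypothetical two-stop inspection path: a pipe fitting, then a tank.
PATH = [
    Waypoint("pipe fitting", (12.4, 3.1, 0.0), (0.0, 0.35, 1.57), standoff_m=2.0),
    Waypoint("storage tank", (30.0, 8.5, 0.0), (0.0, 0.10, 0.00), standoff_m=4.0),
]
```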
The teleoperator may input commands to the robot and the robot may move between locations in the path based on the input. The sensor system102may determine (e.g., based on information received via the location sensor112), that the robot108is at a location of the plurality of locations indicated by the path information. For example, a location sensor of the robot may match the first location (e.g., a pipe at a facility) indicated by the path information. In response to determining that the robot is at the first location, the sensor system102may adjust the OGI camera based on first orientation information associated with the first location. For example, the sensor system102may rotate to a position that allows the pipe to be captured in a view of the camera. Additionally or alternatively, the sensor system102may adjust a zoom or other camera setting of a camera. The sensor system may receive an indication that the orientation of the camera matches the orientation information for the location. For example, the robot may use an actuator to adjust the position of the sensor system to allow an object to be within a field of view of a camera of the sensor system. After adjusting the sensor system, the robot may send a message to the sensor system indicating that the sensor is in the appropriate orientation for the first location. The sensor system may record video or images with a camera (e.g., an RGB camera, an OGI camera, a thermal imaging camera, etc.), for example, in response to receiving the indication that the orientation of the OGI camera matches the orientation information for the location. Or in some cases, video may be recorded continuously, and sections of video may be flagged as relevant in response to such an indication. The sensor system or robot may store the recorded video or images in memory before sending the first video and the second video to a server. FIG.1shows an example robot system100for detection of gas leaks. The robot system100may include a sensor system102, a user device104, a server106, or a robot108. The sensor system102may include a location sensor112, a red green blue (RGB) camera113, an optical gas imaging (OGI) camera114, a cooling system115, or a machine learning (ML) subsystem116. The sensor system102, robot108, server106, and user device104may communicate via the network150. The RGB camera113and the OGI camera114may be located in the sensor system102such that the field of view of the RGB camera113and the field of view of the OGI camera114overlap. For example, the field of view of the RGB camera113may encompass the field of view of the OGI camera114. The RGB camera may include or be a variety of different color cameras (e.g., RGB cameras may include RYB and RYYB cameras), and the term “RGB camera” should not be read as limited to color cameras that include only or all of red, blue, and green channel sensors, e.g., for the present purposes, a RYYB camera should be understood to be a type of RGB camera. The cooling system115may include an air circulation fan, a liquid cooling system, or a variety of other cooling systems. In some embodiments, the robot108may avoid one or more locations based on permissions associated with a user that is logged into the robot108. Various users may be able to log into the robot108to monitor the robot's progress. One or more locations may be restricted from viewing by one or more users. The robot108may obtain information (e.g., a user identification, IP address, MAC address, etc.) 
indicating a user that is logged into the robot108or the sensor system102. The robot108or sensor system102may determine or obtain location permissions associated with the user. For example, the location permissions may be received from the server106. The robot108may cause, based on the location permissions associated with the user, the robot to skip one or more locations of the plurality of locations. For example, the robot108may determine (e.g., based on the location permissions) that the user logged in is restricted from the second location indicated in the path information. After completing inspection of the first location, the robot108may skip the second location and continue to the third location, for example, in response to determining that the user is restricted from the second location. The robot108, or a remote server, may control one or more components of the sensor system102. The robot108may determine to turn down (e.g., decrease cooling, or turn off) the cooling system115, for example, if (e.g., in response to determining that) the OGI camera114is not in use. The cooling system115may be used to keep the sensor system102below a threshold temperature, for example, so that the sensor system102may use one or more cameras, communicate with the robot108, or perform other functions described herein. The cooling system115may be used to maintain the OGI camera114below a threshold temperature to reduce noise detected by a sensor of the OGI camera114below that of the signal from the scene being imaged. The OGI camera114may use a spectral filter method that enables it to detect a gas compound. The spectral filter may be mounted in front of a detector of the OGI camera114. The detector or filter may be cooled to prevent or reduce radiation exchange between the filter and the detector. The filter may restrict the wavelengths of radiation that pass through to the detector (e.g., via spectral adaptation). For example, the filter may restrict wavelengths outside the range of 3-5 micrometers from reaching the detector. The robot108may turn off one or more components of the sensor system102, for example, while traveling between locations indicated in the path information. The robot108may determine that a distance to a location satisfies a threshold (e.g., is greater than a threshold distance). In response to determining that the distance is greater than the threshold distance, the robot108may send a command to the sensor system102. The command may cause the cooling system115(e.g., or a compressor associated with the cooling system115) of the OGI camera114to turn off. The robot108may move along the path and in response to determining that the robot108is at a location (e.g., or is within a threshold distance of a location) of the plurality of locations, may send a second command to the sensor system102. The second command may cause the cooling system115(e.g., or a compressor associated with the cooling system115) of the OGI camera114to turn on, or some embodiments may modulate an intensity of cooling in cooling systems that support a continuous range of cooling intensities. In some embodiments, the robot108may cause the sensor system102to turn off the location sensor112, cameras113-114, or cooling system115(e.g., or a compressor of the cooling system115) while the robot108is docked at a charging station. The robot108may determine that the robot108is charging a power source. 
In response to determining that the robot108is charging the power source, the robot108may cause the sensor system102to turn off the cooling system115. In some embodiments, the robot108may cause the cooling system115to turn up (e.g., to decrease the temperature) or turn on, for example, even when one or more cameras113-114are not in use. The robot108may receive (e.g., from the sensor system102), an indication that a temperature within the sensor system102is above a threshold temperature. In response to receiving the indication that a temperature within the sensor system102is above a threshold temperature, the robot108may cause the cooling system to turn on or adjust a temperature of the sensor system102. Alternatively, the sensor system102may monitor its internal temperature and may cause the cooling system115to turn off or on as needed. Additionally or alternatively, the robot108may search for a cool location (e.g., away from the sun, more than a threshold distance away from one or more heat generating objects in a facility, etc.) to wait, for example, in response to receiving the indication that a temperature within the sensor system102is above the threshold temperature. The robot108may determine a shady location to wait. The robot108may record one or more images of an environment surrounding the robot and may determine a shady location based on pixel intensities, contrast, or other factors. Additionally or alternatively, the robot108may determine, based on inputting a portion of the one or more images into a machine learning model, a first location within the environment that receives less sunlight than a second location within the environment. The robot108may move to the first location, and may cause the cooling system115to run, for example, until a temperature of the sensor system102is below the threshold temperature. The cooling system115or a compressor of the cooling system115may be configured to cool a sensor of the OGI camera114. For example, the cooling system115may be able to cool a sensor of the OGI camera114to below 78 degrees Kelvin. The sensor of the OGI camera114may include an infrared detector and a spectral filter. The spectral filter may prevent light outside a range of 3-5 micrometers from reaching the infrared detector. The cooling system115may include a Peltier system (e.g., with a thermoelectric cooler that cools by causing direct current to flow through a semiconductor of the cooling system) or a refrigeration cycle cooling system (e.g., with refrigerant, a compressor, and a condenser). The sensor system102and robot108may periodically send messages (e.g., heartbeat messages) or status updates to each other. The messages may indicate battery levels of the sensor system102or robot108, or other indications such as error messages. In some embodiments, the robot108may determine that more than a threshold amount of time (e.g., 5 seconds, 30 seconds, 5 minutes, 30 minutes, etc.) has transpired since receiving a message (e.g., heartbeat message) from the sensor system102. In response to determining that more than a threshold amount of time has transpired, the robot108may move to a charging station associated with the robot108. Additionally or alternatively, the robot108may (e.g., in response to determining that more than a threshold amount of time has transpired) send a request to the server106. The request may indicate a diagnostic procedure for the server106to perform on the sensor system102. 
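A minimal sketch of the thermal-management behavior described above (turning the OGI cooler off while docked or far from the next waypoint, turning it back on when a waypoint is near, and seeking a shaded spot when the enclosure overheats) is shown below. The thresholds and the robot and cooler interfaces are assumptions made for illustration, not an API defined by the disclosure.

```python
# Illustrative thermal-management loop; thresholds and interfaces are assumed.
APPROACH_DISTANCE_M = 25.0     # resume cooling within this distance of a waypoint
MAX_ENCLOSURE_TEMP_C = 45.0    # above this, pause and wait somewhere cooler

def manage_cooling(robot, cooler, next_waypoint) -> None:
    if robot.is_docked_and_charging():
        cooler.turn_off()                         # no imaging while charging
        return
    if robot.enclosure_temperature_c() > MAX_ENCLOSURE_TEMP_C:
        cooler.turn_on()
        robot.move_to(robot.find_shaded_spot())   # e.g., lowest-intensity region in a camera image
        return
    if robot.distance_to(next_waypoint) > APPROACH_DISTANCE_M:
        cooler.turn_off()                         # long transit: detector not in use
    else:
        cooler.turn_on()                          # near the waypoint: pre-cool the OGI detector
```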
For example, the server106may cause the sensor system102to reboot or reboot one or more components (e.g., the cooling system, etc.). Additionally or alternatively, the robot108may, in response to determining that more than a threshold amount of time has transpired since receiving a heartbeat message from the sensor system102, cause a power source of the robot to charge a power source (e.g., a battery) of the sensor system102or cause the sensor system102to reboot. The robot108or sensor system102may determine (e.g., based on information received via the location sensor), that the robot108is at a location of the plurality of locations indicated in the path information. For example, a location sensor of the robot108may match the first location (e.g., a pipe, a tank, or other location) indicated in the path information. In some embodiments, the location sensor may allow triangulation with on-site beacons, may detect magnets (e.g., implanted in the floor, ceiling, walls, etc.), optical codes (e.g., aprilTags, ArUco markers, bar codes, etc.) positioned in the facility, may use satellite navigation (e.g., Galileo, Global Navigation Satellite System, Global Positioning System, etc.). In some embodiments, the robot108may be in a facility (e.g., a factory, other building, etc.) where the robot108or sensor system102are unable to receive GPS signals. The robot108may use simultaneous localization and mapping (SLAM) techniques to determine whether the robot108is at a location indicated by the path information, for example, if the robot108is unable to receive GPS signals. Some embodiments may navigate without building a 3D map of the environment (e.g., without SLAM), for example by representing the world as a locally consistent graph of waypoints and edges. In some embodiments, the sensor system102may include a location sensor or the robot108may include a location sensor. A location sensor may be implemented with visual sensing (e.g., via a camera). A location sensor may be implemented without sensing GPS signals or wireless beacons, or some embodiments may use these signals in addition to other location sensors. In some locations or facilities, the robot108may be unable to detect GPS or other wireless communication and may rely on visual SLAM, radar, or lidar systems to navigate the facility. the sensor system102or the robot108may store one or more maps (e.g., one or more electronic maps) associated with a facility. A map may be learned by the robot108or sensor system102via SLAM techniques (e.g., particle filter, extended Kalman filter, GraphSLAM, etc.). The maps may correspond to different configurations that the facility may be in. For example, some facilities may have a first configuration (e.g., layout of different objects) for one time period (e.g., winter season) and a second configuration for a different portion of the year (e.g., summer season). The sensor system102or robot108may use a map corresponding to the layout to assist in navigating the facility to perform an inspection. The path information may be indicated by the map. The map may indicate the locations of one or more aspects of a facility (e.g., location of stairs, pipes, boilers, storage tanks, or a variety of other objects). In some embodiments, the robot108may begin using a first map to navigate, determine that it is unable to recognize its surroundings, and in response may obtain or begin using a different map. 
For example, the sensor system102or server106may receive, from the robot108, navigation information indicating that the robot108is lost. The navigation information may be generated based on a determination that an environment (e.g., one or more objects in a facility) surrounding the robot108does not match information indicated by the first map. The navigation information may include one or more images of the environment surrounding the robot108, or the last known location of the robot108. In response to receiving the navigation information, the server106or the sensor system102may send a second map (e.g., a second map that is determined to match the navigation information received from the robot108). The second map may include an indication of each location of the plurality of locations. In some embodiments, it may be determined that a map that matches the environment surrounding the robot108does not exist or cannot be obtained. In response to determining that the map cannot be obtained, the robot108may generate a map of the environment (e.g., via SLAM techniques). In response to determining that the robot108is at the location indicated by the path information, the robot may adjust the OGI camera114based on orientation information associated with the location. For example, the sensor system102may rotate to a position that allows an object (e.g., a pipe, storage tank, etc.) to be captured in a view of the cameras113-114. In response to determining that the robot108is at the location, the sensor system102may adjust one or more other settings of the sensor system102. The one or more other settings may include zoom, flash (or persistent onboard illumination), or other settings of the OGI camera114or the RGB camera113. For example, at a first location the sensor system102may zoom in10X if indicated by the path information. Additionally or alternatively, the sensor system102may determine an adjustment to be made based on the orientation (e.g., pose) of the robot108. The adjustment may allow a region of interest at the first location to be captured within a field of view of a camera (e.g., camera113, camera114, etc.) of the sensor system102. For example, to take a picture of a particular pipe, the sensor system102may need to be adjusted depending on whether the robot108is facing the pipe or facing a different direction. As an additional example, the sensor system102may be rotated to point upwards to view a location above the robot108. In some embodiments, the orientation information may include a vector indicating a roll of the robot, a pitch of the robot, and a yaw of the robot. The robot108may adjust its position to match the indicated roll, pitch, and yaw. The sensor system102may send a request (e.g., to the robot108or the server106) for position information associated with the robot108. The position information may include pose information indicating the position and orientation of a coordinate frame of the robot108. The pose information may include configuration information, which may include a set of scalar parameters that specify the positions of all of the robot's points relative to a fixed coordinate system. The position information may indicate positions of one or more joints of the robot108, a direction that the robot108is facing, or other orientation information associated with the robot108(e.g., roll, pitch, yaw, etc.). 
In response to sending the request, the sensor system102may receive pose information indicating a pose of the robot108relative to the first location of the plurality of locations. The sensor system102or robot108may adjust, based on the pose information, a position of the sensor system102such that the OGI camera114is facing the first location. The sensor system102may receive an indication that the orientation of the cameras113or114matches the orientation information associated with a location. For example, the robot108may use an actuator to adjust the position of the sensor system102to allow the pipe to be within view of a camera of the sensor system102and after adjusting the sensor system102, the robot108may send a message to the sensor system102indicating that the sensor system102is in the appropriate orientation for the location. The sensor system102may record video or images with a camera (e.g., the RGB camera113, the OGI camera114, a thermal imaging camera, etc.), for example, in response to receiving the indication that the pose (position and orientation) of the camera matches the pose information for the location. The sensor system102may determine that a location should be inspected from multiple view points, for example, based on the path information. The determination to inspect from multiple view points may be made while at the location. For example, the sensor system102may determine that an object does not fit within a single frame of a camera and may take additional pictures to capture the entire object. The determination may be made autonomously by the sensor system102or via teleoperation (e.g., by receiving a command from a teleoperator to take additional pictures/video from additional view points). The determination to take additional pictures may be in response to determining that a leak exists at the location. The sensor system102may determine that a first portion of an object at the location has been recorded by the OGI camera114. In response to determining that the first portion of the object at the first location has been recorded, the sensor system102may cause the robot108to move so that a second portion of the object is within a field of view of the OGI camera114or other camera. The sensor system102may record the second portion of the object with one or more cameras (e.g., the OGI camera114or RGB camera113). For example, a portion of a pipe may be 20 feet long and the sensor system102may cause the robot108to move to multiple locations along the pipe so that the pipe may be inspected using the OGI camera114. In some embodiments, the sensor system102may determine that the robot108is too close to a location or object for purposes of recording the location or object. The sensor system102may send a message to the robot108to move to a location that is suitable for recording an object or location (e.g., a location such that an entire object fits within a field of view of the cameras113-114). For example, the robot108may send a message to cause the sensor system102to record a video (e.g., or one or more images) with the OGI camera114or RGB camera113. The robot108may receive, from the sensor system102, an indication that the robot108is too close to the location or object to be recorded. In response to receiving the indication that the robot108is too close to the location or object, the robot108may move a threshold distance away from the first location. The robot108may cause the sensor system102to record the object or location with the RGB camera113or OGI camera114. 
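One way to decide whether the robot is too close for an object to fit in a camera frame, as described above, is a simple pinhole-camera estimate: an object of width w fits within a horizontal field of view theta only when the standoff distance is at least w / (2 tan(theta / 2)). The sketch below applies that estimate; the field-of-view value and the helper name are illustrative assumptions, not values from the disclosure.

```python
import math

def minimum_standoff_m(object_width_m: float, horizontal_fov_deg: float) -> float:
    """Smallest camera-to-object distance at which the full width fits in frame."""
    half_fov = math.radians(horizontal_fov_deg) / 2.0
    return (object_width_m / 2.0) / math.tan(half_fov)

# Example: a 6 m pipe section viewed by a camera with an assumed 24-degree horizontal FOV.
needed = minimum_standoff_m(6.0, 24.0)
print(f"back up to at least {needed:.1f} m")   # roughly 14 m for these assumed numbers
```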
In some embodiments, the sensor system102may determine that lighting is inadequate and the sensor system102may illuminate an object with a light (e.g., an onboard light) of the sensor system102while recording an image or video. The sensor system or robot may store the recorded video (which includes images) or images in memory, e.g., caching video for a plurality of waypoints along a route before upload. The sensor system102may send the first video and the second video to the server106. Sending a video or an image may include streaming, caching, and on-board transformation of the video before sending (e.g., detecting possible gas leaks via the robot108and only uploading segments of the video, reducing its resolution, compressing it, or cropping it to exclude regions not including leaks, etc.). The resolution of an image may be reduced before sending to the server106or storing the image, for example, if no gas leak is detected in the image. The sensor system102or the robot108may transform the captured video and upload or send the transformed version of the captured video to the server106. In some embodiments, the robot108, sensor system102, or server106may use image processing, computer vision, machine learning, or a variety of other techniques to determine whether a gas leak exists at one or more locations indicated by the path information. The sensor system102may determine that a gas leak exists based on one or more images or video recorded by the cameras113-114. The sensor system102may record, via the RGB camera113, an image of the first location, for example, in response to determining that a gas leak exists at the first location. In some embodiments, a reference image may be an image (e.g., a historical image) of the location that was taken during a previous inspection of the location. The sensor system102may obtain, from a database, a historical image of the location. The historical image may depict an absence of a gas leak at the first location on a date that the historical image was taken. Additionally or alternatively, the sensor system102may use Gaussian Mixture Model-based foreground and background segmentation methods, for example, such as those described in “Background Modeling Using Mixture of Gaussians for Foreground Detection: A Survey,” by Thierry Bouwmans, published on Jan. 1, 2008, which is hereby incorporated by reference. In some embodiments, the sensor system102, the robot108, or the server106may use a machine learning model to determine whether a leak exists at a location. For example, the ML subsystem116may use one or more machine learning models described in connection withFIG.7to detect whether a gas leak exists. The sensor system102may determine a reference image that depicts an absence of a gas leak at the location on a date that the reference image was taken. The sensor system102may generate, based on inputting a first image (e.g., taken by the OGI camera114) and the reference image into a machine learning model, a similarity score. The similarity score may be determined by generating vector representations of the reference image and the first image and comparing the vector representations using a distance metric (e.g., Euclidean distance, cosine distance, etc.). The sensor system102may compare the similarity score with a threshold similarity score. The sensor system102may determine that a gas leak exists at the first location, for example, based on comparing the similarity score with a threshold similarity score.
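As a non-limiting illustration of the comparison just described, the following sketch scores the dissimilarity between a reference (no-leak) embedding and a current embedding and compares it with a threshold; the example vectors stand in for model output, and the function names and threshold value are hypothetical assumptions.

```python
# Illustrative sketch only: comparing a newly captured image against a reference
# (no-leak) image by embedding both and thresholding a distance metric. The example
# embeddings stand in for whatever model produces the vector representations.
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def leak_suspected(reference_vec: np.ndarray, current_vec: np.ndarray,
                   threshold: float = 0.25) -> bool:
    """Flag a possible leak when the current image is dissimilar to the reference."""
    return cosine_distance(reference_vec, current_vec) > threshold

# Example with made-up embeddings standing in for model output.
ref = np.array([0.9, 0.1, 0.4])
cur = np.array([0.2, 0.8, 0.5])
print(leak_suspected(ref, cur))  # True when the distance exceeds the threshold
```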
One or more images may be preprocessed before being input into a machine learning model. For example, an image may be resized, one or more pixels may be modified (e.g., the image may be converted from color to grayscale), thresholding may be performed, or a variety of other preprocessing techniques may be used prior to inputting the image into the machine learning model. As described herein, inputting an image into a machine learning model may comprise preprocessing the image. In some embodiments, the ML subsystem116may use a machine learning model to generate a vector representation (e.g., a vector in a latent embedding space, such as a vector generated by a trained autoencoder responsive to an image or series thereof like in video, etc.) of one or more images taken by the OGI camera114. The sensor system102may determine that a gas leak exists or does not exist, for example, based on a comparison of the vector representations. The sensor system102may obtain a plurality of historical images associated with the location (e.g., images taken on previous inspections of the location). The sensor system102or server106may generate, based on inputting the historical images into a machine learning model, a first vector representation of the images. The sensor system102may generate, based on inputting a second plurality of images (e.g., images taken during the current inspection) into a machine learning model, a second vector representation of the second plurality of images, and may determine, based on a comparison of the first vector representation with the second vector representation, that a gas leak exists at the first location. In some embodiments, the sensor system102may send images (e.g., in videos, in video format, or as standalone images) to the server106so that the server106may train one or more machine learning models to detect gas leaks. The images may be labeled by a user that monitors the robot108. For example, the label may indicate whether a gas leak exists or not. The sensor system102may record (e.g., via the OGI camera) a set of images including an image for each location of the plurality of locations indicated by the path information. The sensor system102may generate a label for each image in the set of images. Each label may indicate a location associated with a corresponding image. The sensor system102may send the set of images to the server106for use in training a machine learning model to detect gas leaks. The server106may provide a user interface for users to annotate one or more images received from the sensor system102or robot108. The server106may receive a first set of images associated with one or more locations indicated by the path information. The server106may generate a webpage including a user interface that is configured to receive input corresponding to one or more portions of an image. The input may be received from a user and may indicate whether or not a gas leak is shown in the image. The server106may receive, via the webpage, input indicating that a gas leak exists in the one or more images. The server106may generate, based on the input, a training data set for a machine learning model. For example, the input received from a user may indicate a label to use for each image in the dataset (e.g., whether a gas leak exists or not). In some embodiments, the machine learning model may be an object detection model with one or more convolutional layers and may be trained to detect whether a plume of gas exists in an image taken by the OGI camera114.
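As a non-limiting illustration of the preprocessing mentioned above (resizing, grayscale conversion, optional thresholding, and normalization) before an image is input into a model, consider the following sketch; the 224x224 input size, file names, and function names are arbitrary assumptions and not part of the disclosure.

```python
# Illustrative sketch only: typical preprocessing before passing an image to a model.
# Sizes, thresholds, and file names are assumptions for illustration.
from typing import Optional

import numpy as np
from PIL import Image

def preprocess(path: str, size=(224, 224), threshold: Optional[int] = None) -> np.ndarray:
    img = Image.open(path).convert("L")      # grayscale conversion
    img = img.resize(size)                   # resize to the model's input size
    arr = np.asarray(img, dtype=np.float32)
    if threshold is not None:                # optional binary thresholding
        arr = (arr >= threshold).astype(np.float32) * 255.0
    return arr / 255.0                       # normalize to [0, 1]

# Example (hypothetical file names):
# batch = np.stack([preprocess("frame_0001.png"), preprocess("frame_0002.png")])
```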
The sensor system102may determine, based on output from the machine learning model, a target location within the plume of gas. For example, the machine learning model may be an object detection and localization model configured to output the coordinates (in pixel space) of a bounding box surrounding the plume of gas, and the sensor system102may use the coordinates to determine a target location (e.g., the center of the bounding box) within the plume of gas. In some embodiments, the sensor system102may include a laser sensor and may use the laser sensor to detect a concentration level of gas at the determined target location. The sensor system102may send a message (e.g., an alert) to the server106, for example, based on determining that the concentration level of gas at the target location exceeds a threshold concentration level. In some embodiments, the machine learning model used to detect whether a gas leak exists may be a Siamese network model. A Siamese network model may be used, for example, if training data is sparse. The Siamese network may be able to determine that an image of a location where a gas leak exists is different from one or more previous images taken of the location when no gas leak existed at the location. The Siamese network may be trained using a dataset comprising image pairs of locations. The image pairs may include positive image pairs (e.g., with two images that show no gas leak at a location). The image pairs may include negative image pairs (e.g., with one image that shows no gas leak at the location and a second image that shows a gas leak at the location). The Siamese network may be trained using a training dataset that includes sets comprising an anchor image, a positive image, and a negative image. The anchor image may be a first image associated with a location when no gas leak is present. The positive image may be a second image (e.g., different from the first image) of the location when no gas leak is present. The negative image may be an image of the location when a gas leak exists at the location. The Siamese network may use a triplet loss function when using sets of images with anchor, positive, and negative images. The Siamese network may be trained to generate vector representations such that a similarity metric (e.g., Euclidean distance, cosine similarity, etc.) between vector representations of an anchor image and a positive image is smaller than a similarity metric between vector representations of an anchor image and a negative image. The Siamese network may comprise two or more identical subnetworks. The subnetworks may have the same architecture (e.g., types and numbers of layers) and may share the same parameters and weights. In some embodiments, the Siamese network may be implemented as a single network that takes as input two images (e.g., one after the other) and generates vector representations for each of the input images. Weights updated in one subnetwork may be updated in each other subnetwork in the same manner (e.g., corresponding weights in each subnetwork may be the same). Each subnetwork may include a convolutional neural network (e.g., with one or more convolutional or depthwise convolutional layers), an image encoding layer (e.g., a fully connected layer), a distance function layer, or an output layer (e.g., using a sigmoid activation function). In some embodiments, the distance function layer may be used as the output layer, for example, if a contrastive loss function is used.
The distance function layer may output a distance value (e.g., Euclidean distance, or a variety of other distance metrics) indicating how similar two images are. For example, a first image of the two images may be an image taken when no gas leak existed at the location, and a second image of the two images may be an image taken during the current inspection. During training, the Siamese network may take as input a first image of a first location (e.g., when no gas leak is present), generate encodings of the first image, then without performing any updates on weights or biases of the network, may take as input a second image. The second image may be an image of a second location or may be an image of the first location when a gas leak is present at the first location. The sensor system102may determine that a leak exists, for example, if the distance corresponding to the two images exceeds a threshold (e.g., indicating that the current image does not match the no-leak image of the location). The Siamese network may use a loss function (e.g., binary cross-entropy, triplet loss, contrastive loss, or a variety of other loss functions). The Siamese network may be trained via gradient descent (e.g., stochastic gradient descent) and backpropagation. The Siamese network may be trained to determine a threshold value (e.g., that is compared with output from the distance function layer) that maximizes (e.g., as determined according to a loss function used by the Siamese network) correct classifications and minimizes incorrect ones. For example, the Siamese network may be trained to determine that an image with a leak (e.g., an image taken by the OGI camera114showing a plume of gas) does not match an image of the same location where no gas leak exists. The Siamese network may output an indication of whether two input images are the same or different. If one image is a historical image with no gas leak present and one image is a new image with a gas leak (e.g., the new image shows a plume of gas), the Siamese network may generate output indicating that the images do not match and the sensor system102may determine that a gas leak is present at the corresponding location. The sensor system102may send an alert or message to the server106or a user device104if the sensor system102determines that a gas leak exists at a location. The sensor system102may make an initial determination of whether there is a gas leak at a location and then send the images to the server106for confirmation. The sensor system102may use a first machine learning model to make the determination and the server106may use a second machine learning model to confirm the determination made by the sensor system102. The machine learning model used by the server106may include more parameters (e.g., weights, layers, etc.) than the machine learning model used by the sensor system102or may obtain higher accuracy or precision on a test set of images. The server106may request that additional images be recorded, for example, if there is a conflict between the determinations made by the sensor system102and the server106. For example, additional images may be requested if the sensor system102determines that there is a gas leak at the location and the server106(e.g., using the same images used by the sensor system) determines that no gas leak exists at the location. The server106may receive images (e.g., taken by the OGI camera114) and may determine, based on inputting the images into a machine learning model, that there is no gas leak at the first location.
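As a non-limiting illustration of the triplet objective discussed above, the following sketch computes a triplet loss that is zero once the anchor-positive embedding distance is smaller than the anchor-negative distance by at least a margin; the example embeddings and margin value are arbitrary stand-ins for subnetwork outputs, not part of the disclosure.

```python
# Illustrative sketch only: a triplet-style objective that pushes the anchor-positive
# distance below the anchor-negative distance by a margin. Values are assumptions.
import numpy as np

def euclidean(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.linalg.norm(a - b))

def triplet_loss(anchor: np.ndarray, positive: np.ndarray, negative: np.ndarray,
                 margin: float = 1.0) -> float:
    """Zero when d(anchor, positive) + margin <= d(anchor, negative)."""
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)

# Example with made-up embeddings for a no-leak anchor image, another no-leak image
# of the same location (positive), and an image of the location with a leak (negative).
a = np.array([0.2, 0.9, 0.1])
p = np.array([0.25, 0.85, 0.12])
n = np.array([0.8, 0.1, 0.7])
print(triplet_loss(a, p, n))  # 0.0 when the embedding already separates the triplet
```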
In response to determining that there is no gas leak at the first location, the server106may send an indication that no gas leak was detected in the first plurality of images or the server106may send a request (e.g., to the sensor system102) to record a second plurality of images of the first location. The second plurality of images may be used by the server106to confirm whether or not a gas leak exists at the location. In response to receiving the request to record additional images, the sensor system102may cause the robot108to move a threshold distance, for example, to obtain a different viewpoint of the location. The sensor system102may record additional images or video of the location from the different viewpoint. Using a different view to take additional pictures may enable the sensor system102or server106to confirm whether or not a gas leak is present at the location. The server106may cause the robot to move to the next location of the plurality of locations indicated by the path information, for example, in response to determining that there is no gas leak at the first location. The sensor system102may determine that a gas leak exists at a location by applying motion detection techniques (e.g., background subtraction) to one or more images taken by the OGI camera114. For example, the sensor system102may generate a subtraction image by using a first image as a reference image and subtracting one or more subsequent images (e.g., images recorded after the first image) from the reference image. The sensor system102may generate a binary image by thresholding the subtraction image. The binary image may indicate gas leaks as white pixels. For example, if more than a threshold number of white pixels are detected in the binary image, the sensor system102may determine that a gas leak exists at the location. The server106may process images or video recorded by the sensor system102to determine whether a gas leak is present at a location. The server106may use one or more machine learning models to analyze the first video or the second video to determine whether a gas leak is present at a location. In response to sending images or video to the server106, the sensor system102may receive, from the server106, a message indicating that the images or video do not indicate a gas leak. The message may indicate that no gas leak was detected in the images or video. For example, the server106may input the images or video into a machine learning model described in connection withFIG.4. The sensor system102may cause the robot108to continue along the path, for example, in response to receiving the message from the server106. For example, after receiving a message from the server106indicating that no gas leak was detected at a first location, the sensor system102may cause the robot108to move to a second location indicated by the path information. In some embodiments, the message from the server106may indicate that additional video should be recorded with the optical gas imaging camera114or the RGB camera113. In some embodiments the server106may obtain the images or video, determine that the images or video indicate a gas leak exists, and in response to determining that the first video indicates a gas leak, the server106may cause the sensor system102to record an additional video with the OGI camera114at the corresponding location. In some embodiments, the sensor system102or the robot108may flag an image for review by a user. 
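As a non-limiting illustration of the frame-differencing approach described above (a reference frame, subtraction, thresholding to a binary image, and counting changed pixels), consider the following sketch; the threshold values and synthetic frames are merely illustrative assumptions.

```python
# Illustrative sketch only: background subtraction with a reference frame, thresholding
# to a binary image, and a count of "white" pixels as a crude motion/leak indicator.
import numpy as np

def leak_motion_detected(reference: np.ndarray, frame: np.ndarray,
                         pixel_threshold: int = 25, count_threshold: int = 500) -> bool:
    """reference and frame are 2-D grayscale arrays of the same shape."""
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
    binary = diff > pixel_threshold          # True ("white") where the scene changed
    return int(binary.sum()) > count_threshold

# Example with synthetic frames: a blob of changed pixels large enough to trip the count.
ref = np.zeros((240, 320), dtype=np.uint8)
cur = ref.copy()
cur[50:100, 50:100] = 200                    # 2500 changed pixels
print(leak_motion_detected(ref, cur))        # True
```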
For example, when an image is flagged for review, the sensor system102or the robot108may send the image to a user to confirm whether or not a gas leak exists at a location (e.g., even without the user making the final determination of whether a gas leak exists). In some embodiments, the server106may oversee or control a plurality of robots. The robots may be located in various places throughout an environment or facility. The server106may determine a robot108that is closest to a location to be inspected and may cause the robot108to inspect the location. The server106may obtain the inspection path information. The server106may obtain location information corresponding to the plurality of robots (e.g., including one or more instances of the robot108). The server106may determine, based on the location information, that the robot108is closer to the first location than other robots of the plurality of robots and, in response to determining that the robot108is closer to the first location than other robots of the plurality of robots, may cause the robot108to inspect the first location or may send the inspection path information to the robot108. The sensor system102, the robot108, or the server106may communicate with each other and may share information indicating resource levels or other information. For example, the sensor system102may share information indicating a battery level of the sensor system102, or whether one or more components (e.g., the location sensor112, the RGB camera113, the OGI camera114, the cooling system115, or the ML subsystem116) is working properly. The sensor system102may receive information indicating a resource level of the robot108or whether one or more components of the robot108is working properly. The sensor system102may receive battery power from the robot108. For example, if a battery level of the sensor system102is below a threshold and a battery level of the robot108is above a threshold (e.g., the same threshold or a different threshold), the sensor system102may draw power from the battery of the robot108. In some embodiments, the server may determine to use one robot over other robots based on status information associated with the robots or sensor systems associated with corresponding robots of the plurality of robots. The status information may indicate a battery level, temperature, or other information about the robot. For example, the server106may receive information indicating that a temperature of the sensor system102of a robot is within a threshold range of temperatures and that the robot is within a threshold distance of the location. In response, the server106may determine to cause the robot to inspect the location. The sensor system102may receive information indicating a battery level of the robot. The sensor system102may determine (e.g., based on the information indicating a battery level of the robot) that the battery level of the robot satisfies a threshold. For example, the battery level may be below a threshold amount required to complete an inspection of a facility. The sensor system102may determine that the robot108may need to conserve battery for moving between locations on the inspection path. The sensor system102may determine that one or more locations of the plurality of locations along the path have yet to be recorded. In response to determining that the battery level of the robot108satisfies the threshold and that one or more locations of the plurality of locations along the path have yet to be recorded, the sensor system102may stop receiving energy from a battery of the robot108and begin receiving energy from a battery of the sensor system102.
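As a non-limiting illustration of the power-management decision just described, the following sketch switches the sensor payload off the robot's battery when the robot's charge drops below a threshold and inspection locations remain; the names and threshold value are hypothetical assumptions.

```python
# Illustrative sketch only: choosing a power source based on the robot's battery level
# and whether any inspection locations remain. Names and thresholds are assumptions.
def select_power_source(robot_battery_pct: float, locations_remaining: int,
                        robot_battery_threshold: float = 30.0) -> str:
    if robot_battery_pct < robot_battery_threshold and locations_remaining > 0:
        return "sensor_battery"   # let the robot conserve charge for locomotion
    return "robot_battery"        # otherwise keep drawing from the robot's battery

# Example: 22% robot charge with three waypoints left switches to the sensor's battery.
print(select_power_source(22.0, 3))  # "sensor_battery"
```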
The robot108may be a bipedal robot, a wheeled robot (such as one with mecanum or omni wheels), a quadruped robot (like Spot™ from Boston Dynamics™ of Boston, Massachusetts), a track-drive robot, an articulated robot (e.g., an arm having two, six, or ten degrees of freedom, etc.), a cartesian robot (e.g., rectilinear or gantry robots, robots having three prismatic joints, etc.), Selective Compliance Assembly Robot Arm (SCARA) robots (e.g., with a donut shaped work envelope, with two parallel joints that provide compliance in one selected plane, with rotary shafts positioned vertically, with an end effector attached to an arm, etc.), delta robots (e.g., parallel link robots with parallel joint linkages connected with a common base, having direct control of each joint over the end effector, which may be used for pick-and-place or product transfer applications, etc.), polar robots (e.g., with a twisting joint connecting the arm with the base and a combination of two rotary joints and one linear joint connecting the links, having a centrally pivoting shaft and an extendable rotating arm, spherical robots, etc.), cylindrical robots (e.g., with at least one rotary joint at the base and at least one prismatic joint connecting the links, with a pivoting shaft and extendable arm that moves vertically and by sliding, with a cylindrical configuration that offers vertical and horizontal linear movement along with rotary movement about the vertical axis, etc.), a self-driving car, a kitchen appliance, construction equipment, or a variety of other types of robots. The robot may include one or more cameras, joints, servomotors, stepper motor actuators, servo motor actuators, pneumatic actuators, or a variety of other components. In some embodiments, the robot108may include wheels, continuous tracks, or a variety of other means for moving, e.g., with or without a tether. In some embodiments, the robot108may be a drone or other flying robot that is capable of flying to each location indicated in the path information. In some embodiments, the robot108may be a boat or submarine that is capable of inspecting locations underwater. In some embodiments, the robot108may be a drone capable of traveling in outer space and inspecting one or more locations on a space station or other spacecraft. The system100may include one or more processors. Some processors might be in the robot108, in the server106, or in the sensor system102. Instructions for implementing one or more aspects described herein may be executed by the one or more processors. The instructions may be distributed among the robot108, the server106, or the sensor system102. The system100may be compliant with one or more regulations set forth by the Environmental Protection Agency.
For example, the system100may be configured to perform inspections that show whether a facility is compliant with requirements in “Oil and Natural Gas Sector: Emission Standards for New, Reconstructed, and Modified Sources” in the Federal Register (“2016 NSPS OOOOa”) or its corresponding 2018 Proposal. FIG.2Ashows an example sensor system201that may include any component or perform any function described above in connection with the sensor system102ofFIG.1. The sensor system201may include an OGI camera205(e.g., which may be the same as the OGI camera114ofFIG.1), an RGB camera210(e.g., which may be the same as the RGB camera113ofFIG.1), and a case215to house the components of the sensor system201. The sensor system201may include any component discussed in connection withFIG.8below. FIG.2Bshows an exploded view of the sensor system201with an exploded view of the RGB camera210and the OGI camera205. The sensor system201may include a printed circuit board220. The printed circuit board220may include a central processing unit, a graphics processing unit, a vision processing unit (e.g., a microprocessor designed to accelerate machine vision tasks), or a variety of components such as those described in connection withFIG.8below. The vision processing unit may be suitable for executing machine learning models or other computer vision techniques such as those described in connection withFIG.1orFIG.7. The sensor system102described in connection withFIG.1may include a vision processing unit to assist in performing one or more machine learning operations described herein. The sensor system201may include a cooling system225(e.g., which may be the same as the cooling system115ofFIG.1). The sensor system201may include one or more batteries216. FIG.3shows an example robot301(e.g., the robot108) with the sensor system320(e.g., the sensor system102) attached. The robot301may include sensors310and sensors312. The sensors310-312may include RGB cameras, infrared cameras, depth sensing cameras, or a variety of other sensors. The cameras may be stereo cameras that provide black and white images and video. The robot301may be communicatively coupled with the sensor system320. The robot301may be able to cause the sensor system320to rotate up/down or from side to side via a joint322or other means, e.g., joint322may include three degrees of freedom (e.g., pitch, roll, and yaw) independently actuated by the robot with servo motors or stepper motors. The robot301may include one or more legs315for moving in an environment, each being actuated by two or more such actuators in some cases. FIG.4shows an example flowchart of the actions involved in inspecting for gas leaks with a robot. For example, process400may represent the actions taken by one or more devices shown inFIGS.1-3orFIG.8. At405, robot system100(e.g., using one or more components in system100(FIG.1) or computing system800via I/O interface850and/or processors810a-810n(FIG.8)) may obtain path information indicating a path for a robot (e.g., the robot108ofFIG.1) to travel. The path information may indicate a plurality of locations along the path to inspect with a sensor system (e.g., the sensor system102ofFIG.1). Each location may be associated with orientation information that indicates an orientation that the sensor system should be moved to for recording a video. For example, an orientation of the sensor system at a particular location may allow the sensor system to capture video or images of a gas line or other area of interest.
At410, robot system100(e.g., using one or more components in system100(FIG.1) and/or computing system800via one or more processors810a-810nand system memory820(FIG.8)) may cause the robot to move along the path. At415, robot system100(e.g., using one or more components in system100(FIG.1) and/or computing system800via one or more processors810a-810n, I/O interface850, and/or system memory820(FIG.8)) may determine that the robot is at a first location of the path. The determination may be made based on information received via a location sensor of the robot or a location sensor of the sensor system. At420, robot system100(e.g., using one or more components in system100(FIG.1) and/or computing system800via one or more processors810a-810n(FIG.8)) may cause the robot to adjust one or more cameras based on first orientation information associated with the first location. At425, robot system100(e.g., the sensor system102or the server106(FIG.1) or computing system800(FIG.8)) may receive an indication that the orientation of the camera matches the orientation information associated with the first location. At430, robot system100(e.g., the sensor system102(FIG.1) and/or computing system800via one or more processors810a-810n(FIG.8)) may record video or one or more images of the first location with one or more cameras (e.g., via a camera of the sensor system). The sensor system may record a first video with the OGI camera and a second video with the RGB camera, for example, in response to receiving an indication that the orientation of the OGI camera matches the orientation information. At435, robot system100(e.g., the sensor system102(FIG.1) and/or computing system800via one or more processors810a-810n(FIG.8)) may store one or more recorded images or videos of the first location in memory. It is contemplated that the actions or descriptions ofFIG.4may be used with any other embodiment of this disclosure. In addition, the actions and descriptions described in relation toFIG.4may be done in alternative orders or in parallel to further the purposes of this disclosure. For example, each of these actions may be performed in any order, in parallel, or simultaneously to reduce lag or increase the speed of the system or method, none of which is to suggest that any other description is limiting. Furthermore, it should be noted that any of the devices or equipment discussed in relation toFIGS.1-3could be used to perform one or more of the actions inFIG.4. FIG.5shows an example flowchart of the actions involved in inspecting for gas leaks with a robot. For example, process500may represent the actions taken by one or more devices shown inFIGS.1-3orFIG.8. At505, the robot108(e.g., using one or more components in system100(FIG.1) or computing system800via I/O interface850and/or processors810a-810n(FIG.8)) may obtain path information indicating a path for a robot (e.g., the robot108ofFIG.1) to travel. The path information may indicate a plurality of locations along the path to inspect with a sensor system (e.g., the sensor system102ofFIG.1). Each location may be associated with orientation information that indicates an orientation that the sensor system should be moved to for recording a video. For example, an orientation of the sensor system at a particular location may allow the sensor system to capture video or images of a gas line or other area of interest.
At510, the robot108(e.g., using one or more components in system100(FIG.1) and/or computing system800via one or more processors810a-810nand system memory820(FIG.8)) may determine (e.g., based on information received via a location sensor) that a distance between a location of the robot and a first location on the inspection path is greater than a threshold distance. At515, the robot108(e.g., using one or more components in system100(FIG.1) and/or computing system800via one or more processors810a-810n, I/O interface850, and/or system memory820(FIG.8)) may cause a compressor of an OGI camera to turn off, for example, in response to determining that the distance is greater than the threshold distance. At520, the robot108(e.g., using one or more components in system100(FIG.1) and/or computing system800via one or more processors810a-810n(FIG.8)) may move along the path indicated by the inspection path information. At525, the robot108(e.g., using one or more components in system100(FIG.1) or computing system800(FIG.8)) may cause the compressor to turn on, for example, in response to determining that the robot is within a threshold distance of the first location (e.g., within 10 feet of the first location, etc.). At530, the robot108(e.g., using one or more components in system100(FIG.1) and/or computing system800via one or more processors810a-810n(FIG.8)) may cause the sensor system to record video. At535, the robot108(e.g., using one or more components in system100(FIG.1) and/or computing system800via one or more processors810a-810n(FIG.8)) may cause the sensor system to store the video in memory. It is contemplated that the actions or descriptions ofFIG.5may be used with any other embodiment of this disclosure, as is generally the case with the various features described herein. In addition, the actions and descriptions described in relation toFIG.5may be done in alternative orders or in parallel to further the purposes of this disclosure. For example, each of these actions may be performed in any order, in parallel, or simultaneously to reduce lag or increase the speed of the system or method, none of which is to suggest that any other description is limiting. Furthermore, it should be noted that any of the devices or equipment discussed in relation toFIGS.1-3could be used to perform one or more of the actions inFIG.5. FIG.6shows an example flowchart of the actions involved in inspecting for gas leaks with a robot. For example, process600may represent the actions taken by one or more devices shown inFIGS.1-3orFIG.8. At605, robot system100(e.g., using one or more components in system100(FIG.1) or computing system800via I/O interface850and/or processors810a-810n(FIG.8)) may obtain path information indicating a path for a robot (e.g., the robot108ofFIG.1) to travel. The path information may indicate a plurality of locations along the path to inspect with a sensor system (e.g., the sensor system102ofFIG.1). Each location may be associated with orientation information that indicates an orientation that the sensor system should be moved to for recording a video. For example, an orientation of the sensor system at a particular location may allow the sensor system to capture video or images of a gas line or other area of interest. At610, robot system100(e.g., using one or more components in system100(FIG.1) and/or computing system800via one or more processors810a-810nand system memory820(FIG.8)) may cause the robot to move along the path. 
At615, robot system100(e.g., using one or more components in system100(FIG.1) and/or computing system800via one or more processors810a-810n, I/O interface850, and/or system memory820(FIG.8)) may determine that the robot is at a first location of the path. The determination may be made based on information received via a location sensor of the robot or a location sensor of the sensor system. At620, robot system100(e.g., using one or more components in system100(FIG.1) and/or computing system800via one or more processors810a-810n(FIG.8)) may cause the robot to adjust one or more cameras based on first orientation information associated with the first location. At625, robot system100(e.g., the sensor system102or the server106(FIG.1) or computing system800(FIG.8)) may record images of a first location indicated by the path information. The sensor system may record video or one or more images of the first location with one or more cameras (e.g., via a camera of the sensor system). The sensor system may record a first video with the OGI camera and a second video with the RGB camera, for example, in response to receiving an indication that the orientation of the OGI camera matches the orientation information. At630, robot system100(e.g., the sensor system102(FIG.1) and/or computing system800via one or more processors810a-810n(FIG.8)) may determine (e.g., based on one or more images or videos recorded by the sensor system) that a gas leak exists at the first location. For example, the sensor system102may use background subtraction between two images recorded by the OGI camera to detect movement (e.g., of gas) in the images. Additionally or alternatively, the sensor system may input one or more images into a machine learning model (e.g., as described in connection withFIG.7) to detect whether gas is present at a location. At635, robot system100(e.g., the sensor system102(FIG.1) and/or computing system800via one or more processors810a-810n(FIG.8)) may send an indication that a gas leak exists at the first location. It is contemplated that the actions or descriptions ofFIG.6may be used with any other embodiment of this disclosure. In addition, the actions and descriptions described in relation toFIG.6may be done in alternative orders or in parallel to further the purposes of this disclosure. For example, each of these actions may be performed in any order, in parallel, or simultaneously to reduce lag or increase the speed of the system or method, none of which is to suggest that any other description is limiting. Furthermore, it should be noted that any of the devices or equipment discussed in relation toFIGS.1-3could be used to perform one or more of the actions inFIG.6. One or more models discussed above may be implemented (e.g., in part), for example, as described in connection with the machine learning model742ofFIG.7. With respect toFIG.7, machine learning model742may take inputs744and provide outputs746. In one use case, outputs746may be fed back to machine learning model742as input to train machine learning model742(e.g., alone or in conjunction with user indications of the accuracy of outputs746, labels associated with the inputs, or with other reference feedback and/or performance metric information). In another use case, machine learning model742may update its configurations (e.g., weights, biases, or other parameters) based on its assessment of its prediction (e.g., outputs746) and reference feedback information (e.g., user indication of accuracy, reference labels, or other information).
In another example use case, where machine learning model742is a neural network, connection weights may be adjusted to reconcile differences between the neural network's prediction and the reference feedback. In a further use case, one or more neurons (or nodes) of the neural network may require that their respective errors are sent backward through the neural network to them to facilitate the update process (e.g., backpropagation of error). Updates to the connection weights may, for example, be reflective of the magnitude of error propagated backward after a forward pass has been completed. In this way, for example, the machine learning model742may be trained to generate results (e.g., response time predictions, sentiment identifiers, urgency levels, etc.) with better recall, accuracy, and/or precision. In some embodiments, the machine learning model742may include an artificial neural network. In such embodiments, machine learning model742may include an input layer and one or more hidden layers. Each neural unit of the machine learning model may be connected with one or more other neural units of the machine learning model742. Such connections can be enforcing or inhibitory in their effect on the activation state of connected neural units. Each individual neural unit may have a summation function which combines the values of one or more of its inputs together. Each connection (or the neural unit itself) may have a threshold function that a signal must surpass before it propagates to other neural units. The machine learning model742may be self-learning or trained, rather than explicitly programmed, and may perform significantly better in certain areas of problem solving, as compared to computer programs that do not use machine learning. During training, an output layer of the machine learning model742may correspond to a classification, and an input known to correspond to that classification may be input into an input layer of the machine learning model742. During testing, an input without a known classification may be input into the input layer, and a determined classification may be output. For example, the classification may be an indication of whether an action is predicted to be completed by a corresponding deadline or not. The machine learning model742trained by the ML subsystem116may include one or more embedding layers at which information or data (e.g., any data or information discussed above in connection withFIGS.1-3) is converted into one or more vector representations. The one or more vector representations of the input data may be pooled at one or more subsequent layers to convert the one or more vector representations into a single vector representation. The machine learning model742may be structured as a factorization machine model. The machine learning model742may be a non-linear model and/or supervised learning model that can perform classification and/or regression. For example, the machine learning model742may be a general-purpose supervised learning algorithm that the system uses for both classification and regression tasks. Alternatively, the machine learning model742may include a Bayesian model configured to perform variational inference, for example, to predict whether an action will be completed by the deadline. The machine learning model742may be implemented as a decision tree and/or as an ensemble model (e.g., using random forest, bagging, adaptive booster, gradient boost, XGBoost, etc.).
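As a non-limiting illustration of the embedding-and-pooling idea described above, in which several per-item vector representations are pooled into a single vector representation, consider the following sketch; the random embedding matrix stands in for learned embedding-layer weights, and all names and sizes are assumptions for illustration.

```python
# Illustrative sketch only: look up one vector per input item, then mean-pool the
# vectors into a single representation. The embedding table is a stand-in for a
# learned layer; sizes and names are assumptions.
import numpy as np

rng = np.random.default_rng(0)
vocab_size, embed_dim = 1000, 16
embedding_table = rng.normal(size=(vocab_size, embed_dim))

def embed_and_pool(item_ids: list) -> np.ndarray:
    """Convert item ids to vectors, then mean-pool into a single representation."""
    vectors = embedding_table[np.asarray(item_ids)]   # shape: (num_items, embed_dim)
    return vectors.mean(axis=0)                       # shape: (embed_dim,)

# Example: pooling three item embeddings into one 16-dimensional vector.
print(embed_and_pool([12, 7, 530]).shape)  # (16,)
```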
FIG.8is a diagram that illustrates an exemplary computing system800in accordance with embodiments of the present technique. Various portions of systems and methods described herein may include or be executed on one or more computer systems similar to computing system800. Further, processes and modules described herein may be executed by one or more processing systems similar to that of computing system800. Computing system800may include one or more processors (e.g., processors810a-810n) coupled to system memory820, an input/output I/O device interface830, and a network interface840via an input/output (I/O) interface850. A processor may include a single processor or a plurality of processors (e.g., distributed processors). A processor may be any suitable processor capable of executing or otherwise performing instructions. A processor may include a central processing unit (CPU) that carries out program instructions to perform the arithmetical, logical, and input/output operations of computing system800. A processor may execute code (e.g., processor firmware, a protocol stack, a database management system, an operating system, or a combination thereof) that creates an execution environment for program instructions. A processor may include a programmable processor. A processor may include general or special purpose microprocessors. A processor may receive instructions and data from a memory (e.g., system memory820). Computing system800may be a uni-processor system including one processor (e.g., processor810a), or a multi-processor system including any number of suitable processors (e.g.,810a-810n). Multiple processors may be employed to provide for parallel or sequential execution of one or more portions of the techniques described herein. Processes, such as logic flows, described herein may be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating corresponding output. Processes described herein may be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). Computing system800may include a plurality of computing devices (e.g., distributed computer systems) to implement various processing functions. I/O device interface830may provide an interface for connection of one or more I/O devices860to computing system800. I/O devices may include devices that receive input (e.g., from a user) or output information (e.g., to a user). I/O devices860may include, for example, a graphical user interface presented on displays (e.g., a cathode ray tube (CRT) or liquid crystal display (LCD) monitor), pointing devices (e.g., a computer mouse or trackball), keyboards, keypads, touchpads, scanning devices, voice recognition devices, gesture recognition devices, printers, audio speakers, microphones, cameras, or the like. I/O devices860may be connected to computing system800through a wired or wireless connection. I/O devices860may be connected to computing system800from a remote location. I/O devices860located on a remote computer system, for example, may be connected to computing system800via a network and network interface840. Network interface840may include a network adapter that provides for connection of computing system800to a network. Network interface840may facilitate data exchange between computing system800and other devices connected to the network.
Network interface840may support wired or wireless communication. The network may include an electronic communication network, such as the Internet, a local area network (LAN), a wide area network (WAN), a cellular communications network, or the like. System memory820may be configured to store program instructions870or data880. Program instructions870may be executable by a processor (e.g., one or more of processors810a-810n) to implement one or more embodiments of the present techniques. Instructions870may include modules of computer program instructions for implementing one or more techniques described herein with regard to various processing modules. Program instructions may include a computer program (which in certain forms is known as a program, software, software application, script, or code). A computer program may be written in a programming language, including compiled or interpreted languages, or declarative or procedural languages. A computer program may include a unit suitable for use in a computing environment, including as a stand-alone program, a module, a component, or a subroutine. A computer program may or may not correspond to a file in a file system. A program may be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program may be deployed to be executed on one or more computer processors located locally at one site or distributed across multiple remote sites and interconnected by a communication network. System memory820may include a tangible program carrier having program instructions stored thereon. A tangible program carrier may include a non-transitory computer readable storage medium. A non-transitory computer readable storage medium may include a machine readable storage device, a machine readable storage substrate, a memory device, or any combination thereof. Non-transitory computer readable storage medium may include non-volatile memory (e.g., flash memory, ROM, PROM, EPROM, EEPROM memory), volatile memory (e.g., random access memory (RAM), static random access memory (SRAM), synchronous dynamic RAM (SDRAM)), bulk storage memory (e.g., CD-ROM and/or DVD-ROM, hard-drives), or the like. System memory820may include a non-transitory computer readable storage medium that may have program instructions stored thereon that are executable by a computer processor (e.g., one or more of processors810a-810n) to cause the subject matter and the functional operations described herein. A memory (e.g., system memory820) may include a single memory device and/or a plurality of memory devices (e.g., distributed memory devices). I/O interface850may be configured to coordinate I/O traffic between processors810a-810n, system memory820, network interface840, I/O devices860, and/or other peripheral devices. I/O interface850may perform protocol, timing, or other data transformations to convert data signals from one component (e.g., system memory820) into a format suitable for use by another component (e.g., processors810a-810n). I/O interface850may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard. 
Embodiments of the techniques described herein may be implemented using a single instance of computing system800or multiple computer systems800configured to host different portions or instances of embodiments. Multiple computer systems800may provide for parallel or sequential processing/execution of one or more portions of the techniques described herein. Those skilled in the art will appreciate that computing system800is merely illustrative and is not intended to limit the scope of the techniques described herein. Computing system800may include any combination of devices or software that may perform or otherwise provide for the performance of the techniques described herein. For example, computing system800may include or be a combination of a cloud-computing system, a data center, a server rack, a server, a virtual server, a desktop computer, a laptop computer, a tablet computer, a server device, a client device, a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a vehicle-mounted computer, or a Global Positioning System (GPS), or the like. Computing system800may also be connected to other devices that are not illustrated, or may operate as a stand-alone system. In addition, the functionality provided by the illustrated components may in some embodiments be combined in fewer components or distributed in additional components. Similarly, in some embodiments, the functionality of some of the illustrated components may not be provided or other additional functionality may be available. Those skilled in the art will also appreciate that while various items are illustrated as being stored in memory or on storage while being used, these items or portions of them may be transferred between memory and other storage devices for purposes of memory management and data integrity. Alternatively, in other embodiments some or all of the software components may execute in memory on another device and communicate with the illustrated computer system via inter-computer communication. Some or all of the system components or data structures may also be stored (e.g., as instructions or structured data) on a computer-accessible medium or a portable article to be read by an appropriate drive, various examples of which are described above. In some embodiments, instructions stored on a computer-accessible medium separate from computing system800may be transmitted to computing system800via transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network or a wireless link. Various embodiments may further include receiving, sending, or storing instructions or data implemented in accordance with the foregoing description upon a computer-accessible medium. Accordingly, the present disclosure may be practiced with other computer system configurations. In block diagrams, illustrated components are depicted as discrete functional blocks, but embodiments are not limited to systems in which the functionality described herein is organized as illustrated. The functionality provided by each of the components may be provided by software or hardware modules that are differently organized than is presently depicted, for example such software or hardware may be intermingled, conjoined, replicated, broken up, distributed (e.g., within a data center or geographically), or otherwise differently organized. 
The functionality described herein may be provided by one or more processors of one or more computers executing code stored on a tangible, non-transitory, machine readable medium. In some cases, third party content delivery networks may host some or all of the information conveyed over networks, in which case, to the extent information (e.g., content) is said to be supplied or otherwise provided, the information may be provided by sending instructions to retrieve that information from a content delivery network. The reader should appreciate that the present application describes several disclosures. Rather than separating those disclosures into multiple isolated patent applications, applicants have grouped these disclosures into a single document because their related subject matter lends itself to economies in the application process. But the distinct advantages and aspects of such disclosures should not be conflated. In some cases, embodiments address all of the deficiencies noted herein, but it should be understood that the disclosures are independently useful, and some embodiments address only a subset of such problems or offer other, unmentioned benefits that will be apparent to those of skill in the art reviewing the present disclosure. Due to costs constraints, some features disclosed herein may not be presently claimed and may be claimed in later filings, such as continuation applications or by amending the present claims. Similarly, due to space constraints, neither the Abstract nor the Summary sections of the present document should be taken as containing a comprehensive listing of all such disclosures or all aspects of such disclosures. It should be understood that the description and the drawings are not intended to limit the disclosure to the particular form disclosed, but to the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure as defined by the appended claims. Further modifications and alternative embodiments of various aspects of the disclosure will be apparent to those skilled in the art in view of this description. Accordingly, this description and the drawings are to be construed as illustrative only and are for the purpose of teaching those skilled in the art the general manner of carrying out the disclosure. It is to be understood that the forms of the disclosure shown and described herein are to be taken as examples of embodiments. Elements and materials may be substituted for those illustrated and described herein, parts and processes may be reversed or omitted, and certain features of the disclosure may be utilized independently, all as would be apparent to one skilled in the art after having the benefit of this description of the disclosure. Changes may be made in the elements described herein without departing from the spirit and scope of the disclosure as described in the following claims. Headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description. As used throughout this application, the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). The words “include”, “including”, and “includes” and the like mean including, but not limited to. As used throughout this application, the singular forms “a,” “an,” and “the” include plural referents unless the content explicitly indicates otherwise. 
Thus, for example, reference to “an element” or “a element” includes a combination of two or more elements, notwithstanding use of other terms and phrases for one or more elements, such as “one or more.” The term “or” is, unless indicated otherwise, non-exclusive, i.e., encompassing both “and” and “or.” Terms describing conditional relationships, e.g., “in response to X, Y,” “upon X, Y,”, “if X, Y,” “when X, Y,” and the like, encompass causal relationships in which the antecedent is a necessary causal condition, the antecedent is a sufficient causal condition, or the antecedent is a contributory causal condition of the consequent, e.g., “state X occurs upon condition Y obtaining” is generic to “X occurs solely upon Y” and “X occurs upon Y and Z.” Such conditional relationships are not limited to consequences that instantly follow the antecedent obtaining, as some consequences may be delayed, and in conditional statements, antecedents are connected to their consequents, e.g., the antecedent is relevant to the likelihood of the consequent occurring. Statements in which a plurality of attributes or functions are mapped to a plurality of objects (e.g., one or more processors performing steps A, B, C, and D) encompasses both all such attributes or functions being mapped to all such objects and subsets of the attributes or functions being mapped to subsets of the attributes or functions (e.g., both all processors each performing steps A-D, and a case in which processor1performs step A, processor2performs step B and part of step C, and processor3performs part of step C and step D), unless otherwise indicated. Similarly, reference to “a computer system” performing step A and “the computer system” performing step B can include the same computing device within the computer system performing both steps or different computing devices within the computer system performing steps A and B. Further, unless otherwise indicated, statements that one value or action is “based on” another condition or value encompass both instances in which the condition or value is the sole factor and instances in which the condition or value is one factor among a plurality of factors. Unless otherwise indicated, statements that “each” instance of some collection have some property should not be read to exclude cases where some otherwise identical or similar members of a larger collection do not have the property, i.e., each does not necessarily mean each and every. Limitations as to sequence of recited steps should not be read into the claims unless explicitly specified, e.g., with explicit language like “after performing X, performing Y,” in contrast to statements that might be improperly argued to imply sequence limitations, like “performing X on items, performing Y on the X′ ed items,” used for purposes of making claims more readable rather than specifying sequence. Statements referring to “at least Z of A, B, and C,” and the like (e.g., “at least Z of A, B, or C”), refer to at least Z of the listed categories (A, B, and C) and do not require at least Z units in each category. Unless specifically stated otherwise, as apparent from the discussion, it is appreciated that throughout this specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining” or the like refer to actions or processes of a specific apparatus, such as a special purpose computer or a similar special purpose electronic processing/computing device. 
Features described with reference to geometric constructs, like “parallel,” “perpendicular/orthogonal,” “square”, “cylindrical,” and the like, should be construed as encompassing items that substantially embody the properties of the geometric construct, e.g., reference to “parallel” surfaces encompasses substantially parallel surfaces. The permitted range of deviation from Platonic ideals of these geometric constructs is to be determined with reference to ranges in the specification, and where such ranges are not stated, with reference to industry norms in the field of use, and where such ranges are not defined, with reference to industry norms in the field of manufacturing of the designated feature, and where such ranges are not defined, features substantially embodying a geometric construct should be construed to include those features within 15% of the defining attributes of that geometric construct. The terms “first”, “second”, “third,” “given” and so on, if used in the claims, are used to distinguish or otherwise identify, and not to show a sequential or numerical limitation. As is the case in ordinary usage in the field, data structures and formats described with reference to uses salient to a human need not be presented in a human-intelligible format to constitute the described data structure or format, e.g., text need not be rendered or even encoded in Unicode or ASCII to constitute text; images, maps, and data-visualizations need not be displayed or decoded to constitute images, maps, and data-visualizations, respectively; speech, music, and other audio need not be emitted through a speaker or decoded to constitute speech, music, or other audio, respectively. Computer implemented instructions, commands, and the like are not limited to executable code and can be implemented in the form of data that causes functionality to be invoked, e.g., in the form of arguments of a function or API call. To the extent bespoke noun phrases (and other coined terms) are used in the claims and lack a self-evident construction, the definition of such phrases may be recited in the claim itself, in which case, the use of such bespoke noun phrases should not be taken as invitation to impart additional limitations by looking to the specification or extrinsic evidence. In this patent filing, to the extent any U.S. patents, U.S. patent applications, or other materials (e.g., articles) have been incorporated by reference, the text of such materials is only incorporated by reference to the extent that no conflict exists between such material and the statements and drawings set forth herein. In the event of such conflict, the text of the present document governs, and terms in this document should not be given a narrower reading in virtue of the way in which those terms are used in other materials incorporated by reference. The present techniques will be better understood with reference to the following enumerated embodiments:1. 
A method comprising: obtaining inspection path information indicating a path for the robot to travel, and a plurality of locations along the path to inspect with the sensor system, wherein each location of the plurality of locations is associated with orientation information indicating an orientation that the RGB camera and the OGI camera should be placed in to record a video; causing the robot to move along the path; determining, based on information received via the location sensor, that the robot is at a first location of the plurality of locations; in response to determining that the robot is at the first location, causing the robot to adjust the OGI camera based on first orientation information associated with the first location; receiving, from the robot, an indication that an orientation of the OGI camera matches the first orientation information; in response to receiving the indication that the orientation of the OGI camera matches the first orientation information, recording a first video with the OGI camera and a second video with the RGB camera; and sending the first video and the second video to the server.2. The method of any of the preceding embodiments, further comprising: receiving, from the server, a message indicating that the first video and the second video do not indicate a gas leak; and in response to receiving the message, causing the robot to move to a second location of the plurality of locations.3. The method of any of the preceding embodiments, further comprising: receiving information indicating a battery level of the robot; determining, based on the information indicating a battery level of the robot, that the battery level of the robot satisfies a threshold; determining that one or more locations of the plurality of locations along the path have yet to be recorded; and in response to determining that the battery level of the robot satisfies the threshold and that one or more locations of the plurality of locations along the path have yet to be recorded, causing the sensor system to stop receiving energy from a battery of the robot and begin receiving energy from a battery of the sensor system.4. The method of any of the preceding embodiments, wherein the OGI camera comprises an indium antimonide detector.5. The method of any of the preceding embodiments, wherein the OGI camera comprises a quantum well infrared photodetector.6. The method of any of the preceding embodiments, wherein causing the robot to adjust the OGI camera based on first orientation information associated with the first location comprises: sending, to the robot, a request for pose information associated with the robot; in response to sending the request, receiving pose information indicating a pose of the robot relative to the first location of the plurality of locations; and adjusting, based on the pose information, a position of the sensor system such that the OGI camera is facing the first location.7. The method of any of the preceding embodiments, wherein the inspection path information is associated with a first map obtained by the robot, the method further comprising: receiving, from the robot, navigation information indicating that the robot is lost, wherein the navigation information is generated based on a determination that an environment surrounding the robot does not match the first map; and in response to receiving the navigation information, sending a second map with an indication of each location of the plurality of locations to the robot.8.
The method of any of the preceding embodiments, further comprising: determining, based on an image received from the robot, the second map from a plurality of maps corresponding to the facility.9. The method of any of the preceding embodiments, wherein the inspection path information is associated with a first map obtained by the robot, wherein the first map indicates one or more objects in the facility, the method further comprising: receiving, from the robot, an image of an environment surrounding the robot and navigation information indicating that the robot is lost, wherein the navigation information is generated based on a determination that an environment surrounding the robot does not match the first map; and in response to determining that the image does not correspond to any map of a plurality of maps associated with the facility, causing the robot to generate a new map of the facility.10. The method of any of the preceding embodiments, wherein the first orientation information comprises a vector indicating a roll of the robot, a pitch of the robot, and a yaw of the robot.11. The method of any of the preceding embodiments, wherein causing the robot to move along the path comprises: receiving information indicating a user that is logged into the robot; determining location permissions associated with the user; and causing, based on the location permissions associated with the user, the robot to skip a second location of the plurality of locations.12. The method of any of the preceding embodiments, wherein the robot is configured to perform operations comprising: determining that more than a threshold amount of time has transpired since receiving a heartbeat message from the sensor system; and in response to determining that more than a threshold amount of time has transpired, moving to a charging station associated with the robot.13. The method of any of the preceding embodiments, wherein the robot is configured to perform operations comprising: determining that more than a threshold amount of time has transpired since receiving a heartbeat message from the sensor system; and in response to determining that more than a threshold amount of time has transpired, sending a request to the server, wherein the request indicates a diagnostic procedure for the server to perform on the sensor system.14. The method of any of the preceding embodiments, wherein the instructions for causing the robot to adjust the OGI camera based on first orientation information associated with the first location, when executed, cause the one or more processors to perform operations further comprising: determining that a first portion of an object at the first location has been recorded by the OGI camera; and in response to determining that a first portion of an object at the first location has been recorded, causing the robot to move so that a second portion of the object is within a field of view of the OGI camera.15. The method of any of the preceding embodiments, wherein the server performs operations comprising: receiving the first video and the second video; determining that the first video indicates a gas leak; and in response to determining that the first video indicates a gas leak, causing the sensor system to record an additional video with the OGI camera at the first location.16. 
The method of any of the preceding embodiments, wherein the server performs operations comprising: receiving, from the sensor system, the first video; determining, based on inputting the first video into a machine learning model, that there is no gas leak at the first location; and in response to determining that there is no gas leak at the first location, sending an indication that no gas leak was detected in the first plurality of images to the sensor system and a request indicating that the robot should move to a second location of the plurality of locations.17. The method of any of the preceding embodiments, wherein the server performs operations comprising: obtaining the inspection path information; obtaining location information corresponding to a plurality of robots, wherein the plurality of robots comprises the robot; determining, based on the location information, that the robot is closer to the first location than other robots of the plurality of robots; and in response to determining that the robot is closer to the first location than other robots of the plurality of robots, sending the inspection path information to the robot.18. The method of any of the preceding embodiments, wherein sending the inspection path information to the robot is performed in response to: receiving information indicating that a temperature of the sensor system is between a threshold range of temperatures.19. The method of any of the preceding embodiments, wherein sending the inspection path information to the robot is performed in response to: receiving information indicating that a battery level of the robot is above a threshold battery level.20. A tangible, non-transitory, machine-readable medium storing instructions that, when executed by a data processing apparatus, cause the data processing apparatus to perform operations comprising those of any of embodiments 1-19.21. A system comprising: one or more processors; and memory storing instructions that, when executed by the processors, cause the processors to effectuate operations comprising those of any of embodiments 1-19.22. A system comprising means for performing any of embodiments 1-19. Embodiments May Include 1. A method comprising: receiving inspection path information indicating a path for the robot to travel, and a plurality of locations along the path to inspect with the sensor system; determining, based on information received via the location sensor, that a distance between a location of the robot and a first location of the plurality of locations is greater than a threshold distance; in response to determining that the distance is greater than the threshold distance, sending a first command to the sensor system, wherein the first command causes the compressor of the OGI camera to turn off; moving along the path; in response to determining that the robot is at a first location of the plurality of locations, sending a second command to the sensor system, wherein the second command causes the compressor of the OGI camera to turn on; causing the sensor system to record a first video with the OGI camera and a second video with the RGB camera; and causing the sensor system to send the first video and the second video to the server.2. The method of any of the preceding embodiments, further comprising: determining that the robot is charging a power source of the robot; and in response to determining that the robot is charging the power source, causing the sensor system to turn off the compressor.3. 
The method of any of the preceding embodiments, further comprising: receiving, from the sensor system, an indication that a temperature within the sensor system is above a threshold temperature; and in response to receiving the indication that a temperature within the sensor system is above a threshold temperature, causing the compressor to turn on.4. The method of any of the preceding embodiments, further comprising: in response to receiving the indication that a temperature within the sensor system is above a threshold temperature, recording a picture of an environment surrounding the robot; determining, based on inputting a portion of the picture into a machine learning model, a first location within the environment that receives less sunlight than a second location within the environment; moving to the first location; and causing the compressor to run until a temperature of the sensor system is below the threshold temperature.5. The method of any of the preceding embodiments, wherein a first field of view of the RGB camera encompasses a second field of view of the OGI camera.6. The method of any of the preceding embodiments, wherein the compressor is configured to cool the sensor to below 78 degrees Kelvin.7. The method of any of the preceding embodiments, wherein the sensor comprises an infrared detector and a spectral filter, wherein the spectral filter prevents light outside a range of 3-5 micrometers from reaching the infrared detector.8. The method of any of the preceding embodiments, wherein the cooling system comprises a second compressor, a refrigerant, and a condenser.9. The method of any of the preceding embodiments, wherein the cooling system comprises a thermoelectric cooler that cools the sensor system by causing direct current to flow through a semiconductor of the cooling system.10. The method of any of the preceding embodiments, wherein recording a first video with the OGI camera and a second video with the RGB camera comprises: adjusting, based on orientation information associated with the first location, an orientation of the robot, wherein the orientation information comprises an indication of a pitch, a yaw, and a roll of the robot; and in response to adjusting the orientation, recording the first video and the second video.11. The method of any of the preceding embodiments, further comprising: receiving information indicating a first battery level of the sensor system; determining, based on the information indicating a first battery level of the sensor system, that the first battery level of the sensor system is above a first threshold; determining that a second battery level of the robot is below a second threshold; and based on determining that the first battery level is above the first threshold and the second battery level is below the second threshold, changing a source of power of the robot from a battery of the robot to the battery of the sensor system.12. The method of any of the preceding embodiments, further comprising: generating, via a camera of the robot, an image of an environment adjacent to the robot; determining, based on a comparison of the image with a first map, that the first map does not match the environment; and in response to determining that the first map does not match the environment, sending, to the server, the image and a request for an updated map that corresponds to the image.13.
The method of any of the preceding embodiments, further comprising: determining that more than a threshold amount of time has transpired since receiving a heartbeat message from the sensor system; and in response to determining that more than a threshold amount of time has transpired, moving to a charging station associated with the robot.14. The method of any of the preceding embodiments, further comprising: determining that more than a threshold amount of time has transpired since receiving a heartbeat message from the sensor system; and in response to determining that more than a threshold amount of time has transpired, causing the sensor system to reboot.15. The method of any of the preceding embodiments, further comprising: in response to determining that more than a threshold amount of time has transpired, causing a battery of the robot to charge a battery of the sensor system.16. The method of any of the preceding embodiments, wherein causing the sensor system to record a first video with the OGI camera and a second video with the RGB camera comprises: causing the sensor system to record the first video with the OGI camera; receiving, from the sensor system, an indication that the robot is too close to the first location; in response to receiving the indication that the robot is too close to the first location, moving a threshold distance away from the first location; and causing the sensor system to record the second video with the RGB camera.17. The method of any of the preceding embodiments, wherein the server performs operations comprising: obtaining the inspection path information; obtaining location information corresponding to a plurality of robots, wherein the plurality of robots comprises the robot; determining, based on the location information, that the robot is closer to the first location than other robots of the plurality of robots; and in response to determining that the robot is closer to the first location than other robots of the plurality of robots, sending the inspection path information to the robot.18. The method of any of the preceding embodiments, wherein sending the inspection path information to the robot is performed in response to: receiving information indicating that a temperature of the sensor system is between a threshold range of temperatures.19. The method of any of the preceding embodiments, wherein sending the inspection path information to the robot is performed in response to: receiving information indicating that a battery level of the robot is above a threshold battery level.20. A tangible, non-transitory, machine-readable medium storing instructions that, when executed by a data processing apparatus, cause the data processing apparatus to perform operations comprising those of any of embodiments 1-21.21. A system comprising: one or more processors; and memory storing instructions that, when executed by the processors, cause the processors to effectuate operations comprising those of any of embodiments 1-21.22. A system comprising means for performing any of embodiments 1-21. Embodiments May Include 1. 
A method comprising: obtaining inspection path information indicating a path for the robot to travel, and a plurality of locations along the path to inspect with the sensor system, wherein each location of the plurality of locations is associated with orientation information indicating an orientation that the RGB camera and the OGI camera should be placed in to record a video; causing the robot to move along the path; determining, based on information received via the location sensor, that the robot is at a first location of the plurality of locations; in response to determining that the robot is at the first location, causing the robot to adjust the OGI camera based on first orientation information associated with the first location; recording a first plurality of images with the OGI camera; determining, based on the first plurality of images, that a gas leak exists at the first location; and sending, to the server, an indication that a gas leak exists at the first location.2. The method of any of the preceding embodiments, wherein the instructions, when executed, cause the one or more processors to perform operations further comprising: in response to determining that a gas leak exists at the first location, recording, via the RGB camera, an image of the first location; and sending the image to the server.3. The method of any of the preceding embodiments, wherein determining that a gas leak exists at the first location comprises: determining a reference image associated with the first location; generating a subtracted image by subtracting a first image of the first plurality of images from the reference image; and determining, based on the subtracted image, that a gas leak exists at the first location.4. The method of any of the preceding embodiments, wherein determining a reference image comprises: obtaining, from a database, a historical image of the first location, wherein the historical image depicts an absence of a gas leak at the first location on a date that the historical image was taken.5. The method of any of the preceding embodiments, wherein determining a reference image comprises: selecting, from the first plurality of images, an image that was recorded before any other image of the first plurality of images.6. The method of any of the preceding embodiments, wherein determining that a gas leak exists at the first location comprises: determining a reference image, wherein the reference image depicts an absence of a gas leak at the first location on a date that the reference image was taken; generating, based on inputting a first image of the first plurality of images and the reference image into a machine learning model, a similarity score; comparing the similarity score with a threshold similarity score; and based on comparing the similarity score with a threshold similarity score, determining that a gas leak exists at the first location.7. 
The method of any of the preceding embodiments, wherein determining that a gas leak exists at the first location comprises: obtaining a second plurality of historical images associated with the first location; generating, based on inputting the first plurality of images into a machine learning model, a first vector representation of the first plurality of images; generating, based on inputting the second plurality of historical images into a machine learning model, a second vector representation of the second plurality of historical images; and determining, based on a comparison of the first vector representation with the second vector representation, that a gas leak exists at the first location.8. The method of any of the preceding embodiments, wherein the instructions, when executed, cause the one or more processors to perform operations further comprising: recording, via the OGI camera, a set of images, wherein the set of images comprises an image for each location of the plurality of locations; generating a label for each image in the set of images, wherein each label indicates a location associated with a corresponding image and indicates whether a gas leak was detected in the corresponding image; and sending the set of images to the server for use in training a machine learning model.9. The method of any of the preceding embodiments, wherein the instructions for determining that a gas leak exists at the first location, when executed, cause the one or more processors to perform operations comprising: inputting the first plurality of images into a machine learning model; and in response to inputting the first plurality of images into the machine learning model, classifying one or more objects in the first plurality of images as a plume of gas.10. The method of any of the preceding embodiments, further comprising: determining, based on output from the machine learning model and based on the first plurality of images, a target location within the plume of gas; sensing, via a laser sensor of the sensor system, a concentration level of gas at the target location; and based on determining that the concentration level of gas at the target location exceeds a threshold concentration level, sending an alert to the server.11. The method of any of the preceding embodiments, wherein the server is configured to perform operations comprising: receiving, from the sensor system, a first set of images, wherein the first set of images is associated with one or more locations of the first plurality of locations; generating a webpage comprising a user interface, wherein the user interface is configured to receive input on one or more portions of an image; receiving, via the webpage, input corresponding to one or more images of the first set of images, wherein the input indicates a gas leak exists in the one or more images; and generating, based on the input and the first set of images, a training data set for a machine learning model.12. The method of any of the preceding embodiments, wherein the first orientation information comprises a vector indicating a roll of the robot, a pitch of the robot, and a yaw of the robot.13.
The method of any of the preceding embodiments, wherein causing the robot to move along the path comprises: receiving information indicating a user that is logged into the robot; determining location permissions associated with the user; and causing, based on the location permissions associated with the user, the robot to skip a second location of the plurality of locations.14. The method of any of the preceding embodiments, wherein the server is configured to perform operations comprising: receiving, from the sensor system, the first plurality of images; determining, based on inputting the first plurality of images into a machine learning model, that there is no gas leak at the first location; and in response to determining that there is no gas leak at the first location, sending an indication that no gas leak was detected in the first plurality of images and a request to record a second plurality of images of the first location.15. The method of any of the preceding embodiments, wherein the instructions, when executed, cause the one or more processors to perform operations further comprising: in response to receiving the request, recording a second plurality of images of the first location; determining, based on the second plurality of images, that there is no gas leak at the first location; and in response to determining that there is no gas leak at the first location, causing the robot to move to a second location of the plurality of locations.16. The method of any of the preceding embodiments, further comprising: in response to receiving the request, causing the robot to move closer to the first location; recording a second plurality of images of the first location; and sending the second plurality of images to the server.17. The method of any of the preceding embodiments, wherein the server performs operations comprising: obtaining the inspection path information; obtaining location information corresponding to a plurality of robots, wherein the plurality of robots comprises the robot; determining, based on the location information, that the robot is closer to the first location than other robots of the plurality of robots; and in response to determining that the robot is closer to the first location than other robots of the plurality of robots, sending the inspection path information to the robot.18. The method of any of the preceding embodiments, wherein the robot is configured to perform operations comprising: determining that more than a threshold amount of time has transpired since receiving a heartbeat message from the sensor system; and in response to determining that more than a threshold amount of time has transpired, sending a request to the server, wherein the request indicates a diagnostic procedure for the server to perform on the sensor system.19. The method of any of the preceding embodiments, wherein the robot is configured to perform operations comprising: determining that more than a threshold amount of time has transpired since receiving a heartbeat message from the sensor system; and in response to determining that more than a threshold amount of time has transpired, moving to a charging station associated with the robot.20. A tangible, non-transitory, machine-readable medium storing instructions that, when executed by a data processing apparatus, cause the data processing apparatus to perform operations comprising those of any of embodiments 1-19.21.
A system comprising: one or more processors; and memory storing instructions that, when executed by the processors, cause the processors to effectuate operations comprising those of any of embodiments 1-19.22. A system comprising means for performing any of embodiments 1-19. The above-described embodiments of the present disclosure are presented for purposes of illustration and not of limitation, and the present disclosure is limited only by the claims which follow. Furthermore, it should be noted that the features and limitations described in any one embodiment may be applied to any other embodiment herein, and flowcharts or examples relating to one embodiment may be combined with any other embodiment in a suitable manner, done in different orders, or done in parallel. In addition, the systems and methods described herein may be performed in real time. It should also be noted that the systems and/or methods described above may be applied to, or used in accordance with, other systems and/or methods. | 113,711 |
11858122 | DETAILED DESCRIPTION In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the disclosure. It will be appreciated, however, by those of ordinary skill in the art, that the disclosed techniques may be practiced without these specific details or with an equivalent arrangement. To mitigate the problems described herein, the inventors had to both invent solutions and, in some cases just as importantly, recognize problems overlooked (or not yet foreseen) by others in the fields of robotics, computer vision, electrical engineering, and machine learning. Indeed, the inventors wish to emphasize the difficulty of recognizing those problems that are nascent and will become much more apparent in the future should trends in industry continue as the inventors expect. Further, because multiple problems are addressed, it should be understood that some embodiments are problem-specific, and not all embodiments address every problem with traditional systems described herein or provide every benefit described herein. That said, improvements that solve various permutations of these problems are described below. Many robots are not well suited to detect leaks at gas processing facilities. They often lack the appropriate sensor suite, as many robot-mounted cameras cannot detect gas that, in the visual spectrum, is often transparent. Further, such robots often generate excessive heat that impairs the operation of sensors suitable for detecting leaks, and many robotic imaging systems capture large amounts of video data without regard to whether those videos depict the types of features of interest in gas leak detection, leaving users with large volumes of video data to wade through after the fact. None of which is to suggest that any techniques are disclaimed. To mitigate some or all of these problems, in some embodiments, a robot may use a sensor system to inspect for gas leaks. The sensor system may obtain path information (e.g., inspection path information) that indicates a path for the robot to travel. The path information may indicate locations along the path to inspect with the sensor system, or in some cases, the robot may be configured to detect potential leaks in the field and determine to inspect at a location on the fly. Each location may be associated with information indicating a particular view of the location that is to be captured via one or more robot-mounted cameras. For example, the path information may indicate the location of a pipe fitting (e.g., a first location) and a tank (e.g., a second location) within a facility that is to be inspected using the sensor system. The information may include orientation information indicating a pose (e.g., position and orientation) that a red green blue (RGB) camera and an OGI camera should be placed in to record an image (e.g., or video) of each location. The information may indicate a distance (e.g., a minimum distance, target distance, or a maximum distance) from an object to move to record an image. The robot may move along the path or move to a starting location of the path, for example, in response to obtaining the path information. The robot may move autonomously. For example, after receiving input to move along a path, the robot may move to each location on the path without further input from a user. The robot may move with assistance from a teleoperator.
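By way of a hedged illustration only, the inspection path information and the move-orient-record-upload flow described above might be sketched as follows in Python; the class names, field names, default distances, and the robot, sensor_system, and server objects with their methods are all invented for explanation and are not taken from the disclosure or from any real robot SDK.

```python
# Hypothetical sketch of inspection path information and the inspection loop;
# all names and values below are illustrative assumptions.
import math
import time
from dataclasses import dataclass, field
from typing import List, Tuple

ARRIVAL_TOLERANCE_M = 0.5

@dataclass
class Waypoint:
    location_id: str                         # e.g., "pipe_fitting_07"
    position: Tuple[float, float, float]     # (x, y, z) in the facility map frame
    orientation: Tuple[float, float, float]  # (roll, pitch, yaw) for the RGB/OGI cameras
    target_distance_m: float = 2.0           # preferred standoff distance from the object
    max_distance_m: float = 5.0              # do not record from farther than this

@dataclass
class InspectionPath:
    waypoints: List[Waypoint] = field(default_factory=list)

def planar_distance(a, b):
    return math.dist(a[:2], b[:2])  # distance in the facility map frame

def run_inspection(robot, sensor_system, server, path):
    for wp in path.waypoints:
        robot.move_to(wp.position)                      # travel along the path
        while planar_distance(robot.location(), wp.position) > ARRIVAL_TOLERANCE_M:
            time.sleep(0.5)                             # poll the location sensor
        sensor_system.orient_cameras(wp.orientation)    # adjust the OGI/RGB pose
        while not sensor_system.orientation_matches(wp.orientation):
            time.sleep(0.1)                             # wait for the match indication
        ogi_clip = sensor_system.record_ogi(seconds=10)
        rgb_clip = sensor_system.record_rgb(seconds=10)
        server.upload(wp.location_id, ogi_clip, rgb_clip)

path = InspectionPath(waypoints=[
    Waypoint("pipe_fitting_07", (12.3, 4.1, 0.0), (0.0, 0.15, 1.57)),
    Waypoint("storage_tank_02", (30.0, 9.5, 0.0), (0.0, 0.30, 3.14)),
])
```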
The teleoperator may input commands to the robot and the robot may move between locations in the path based on the input. The sensor system 102 may determine (e.g., based on information received via the location sensor 112) that the robot 108 is at a location of the plurality of locations indicated by the path information. For example, a location sensor of the robot may match the first location (e.g., a pipe at a facility) indicated by the path information. In response to determining that the robot is at the first location, the sensor system 102 may adjust the OGI camera based on first orientation information associated with the first location. For example, the sensor system 102 may rotate to a position that allows the pipe to be captured in a view of the camera. Additionally or alternatively, the sensor system 102 may adjust a zoom or other camera setting of a camera. The sensor system may receive an indication that the orientation of the camera matches the orientation information for the location. For example, the robot may use an actuator to adjust the position of the sensor system to allow an object to be within a field of view of a camera of the sensor system. After adjusting the sensor system, the robot may send a message to the sensor system indicating that the sensor is in the appropriate orientation for the first location. The sensor system may record video or images with a camera (e.g., an RGB camera, an OGI camera, a thermal imaging camera, etc.), for example, in response to receiving the indication that the orientation of the OGI camera matches the orientation information for the location. Or in some cases, video may be recorded continuously, and sections of video may be flagged as relevant in response to such an indication. The sensor system or robot may store the recorded video or images in memory before sending the first video and the second video to a server. FIG. 1 shows an example robot system 100 for detection of gas leaks. The robot system 100 may include a sensor system 102, a user device 104, a server 106, or a robot 108. The sensor system 102 may include a location sensor 112, a red green blue (RGB) camera 113, an optical gas imaging (OGI) camera 114, a cooling system 115, or a machine learning (ML) subsystem 116. The sensor system 102, robot 108, server 106, and user device 104 may communicate via the network 150. The RGB camera 113 and the OGI camera 114 may be located in the sensor system 102 such that the field of view of the RGB camera 113 and the field of view of the OGI camera 114 overlap. For example, the field of view of the RGB camera 113 may encompass the field of view of the OGI camera 114. The RGB camera may include or be a variety of different color cameras (e.g., RGB cameras may include RYB and RYYB cameras), and the term “RGB camera” should not be read as limited to color cameras that include only or all of red, blue, and green channel sensors, e.g., for the present purposes, a RYYB camera should be understood to be a type of RGB camera. The cooling system 115 may include an air circulation fan, a liquid cooling system, or a variety of other cooling systems. In some embodiments, the robot 108 may avoid one or more locations based on permissions associated with a user that is logged into the robot 108. Various users may be able to log into the robot 108 to monitor the robot's progress. One or more locations may be restricted from viewing by one or more users.
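As a brief sketch of the permission-based skipping described here, the waypoints from the earlier illustrative path structure could be filtered before the inspection loop runs; the permission lookup shape below is an assumption.

```python
# Sketch of filtering out waypoints the logged-in user is not permitted to
# view; get_location_permissions is a hypothetical lookup, not a real API.
def allowed_waypoints(path, user_id, get_location_permissions):
    permitted = set(get_location_permissions(user_id))   # e.g., {"pipe_fitting_07", ...}
    return [wp for wp in path.waypoints if wp.location_id in permitted]
```

The earlier loop would then iterate over allowed_waypoints(...) rather than path.waypoints, so restricted locations are simply skipped.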
The robot 108 may obtain information (e.g., a user identification, IP address, MAC address, etc.) indicating a user that is logged into the robot 108 or the sensor system 102. The robot 108 or sensor system 102 may determine or obtain location permissions associated with the user. For example, the location permissions may be received from the server 106. The robot 108 may cause, based on the location permissions associated with the user, the robot to skip one or more locations of the plurality of locations. For example, the robot 108 may determine (e.g., based on the location permissions) that the user logged in is restricted from the second location indicated in the path information. After completing inspection of the first location, the robot 108 may skip the second location and continue to the third location, for example, in response to determining that the user is restricted from the second location. The robot 108, or a remote server, may control one or more components of the sensor system 102. The robot 108 may determine to turn down (e.g., decrease cooling, or turn off) the cooling system 115, for example, if (e.g., in response to determining that) the OGI camera 114 is not in use. The cooling system 115 may be used to keep the sensor system 102 below a threshold temperature, for example, so that the sensor system 102 may use one or more cameras, communicate with the robot 108, or perform other functions described herein. The cooling system 115 may be used to maintain the OGI camera 114 below a threshold temperature to reduce noise detected by a sensor of the OGI camera 114 below that of the signal from the scene being imaged. The OGI camera 114 may use a spectral filter method that enables it to detect a gas compound. The spectral filter may be mounted in front of a detector of the OGI camera 114. The detector or filter may be cooled to prevent or reduce radiation exchange between the filter and the detector. The filter may restrict the wavelengths of radiation that pass through to the detector (e.g., via spectral adaptation). For example, the filter may restrict wavelengths outside the range of 3-5 micrometers from reaching the detector. The robot 108 may turn off one or more components of the sensor system 102, for example, while traveling between locations indicated in the path information. The robot 108 may determine that a distance to a location satisfies a threshold (e.g., is greater than a threshold distance). In response to determining that the distance is greater than the threshold distance, the robot 108 may send a command to the sensor system 102. The command may cause the cooling system 115 (e.g., or a compressor associated with the cooling system 115) of the OGI camera 114 to turn off. The robot 108 may move along the path and in response to determining that the robot 108 is at a location (e.g., or is within a threshold distance of a location) of the plurality of locations, may send a second command to the sensor system 102. The second command may cause the cooling system 115 (e.g., or a compressor associated with the cooling system 115) of the OGI camera 114 to turn on, or some embodiments may modulate an intensity of cooling in cooling systems that support a continuous range of cooling intensities. In some embodiments, the robot 108 may cause the sensor system 102 to turn off the location sensor 112, cameras 113-114, or cooling system 115 (e.g., or a compressor of the cooling system 115) while the robot 108 is docked at a charging station.
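The distance- and docking-based gating of the OGI camera's compressor described above could be sketched roughly as follows; the 25 m threshold and the robot/sensor methods are illustrative assumptions rather than values from the disclosure.

```python
# Rough sketch of compressor gating while in transit or docked; threshold and
# method names are assumptions.
import math

COMPRESSOR_OFF_DISTANCE_M = 25.0

def update_compressor(robot, sensor_system, next_waypoint):
    d = math.dist(robot.location()[:2], next_waypoint.position[:2])
    if robot.is_charging() or d > COMPRESSOR_OFF_DISTANCE_M:
        sensor_system.set_compressor(on=False)  # save power while docked or far from a waypoint
    else:
        sensor_system.set_compressor(on=True)   # pre-cool the OGI detector near a waypoint
```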
The robot 108 may determine that the robot 108 is charging a power source. In response to determining that the robot 108 is charging the power source, the robot 108 may cause the sensor system 102 to turn off the cooling system 115. In some embodiments, the robot 108 may cause the cooling system 115 to turn up (e.g., to decrease the temperature) or turn on, for example, even when one or more cameras 113-114 are not in use. The robot 108 may receive (e.g., from the sensor system 102) an indication that a temperature within the sensor system 102 is above a threshold temperature. In response to receiving the indication that a temperature within the sensor system 102 is above a threshold temperature, the robot 108 may cause the cooling system to turn on or adjust a temperature of the sensor system 102. Alternatively, the sensor system 102 may monitor its internal temperature and may cause the cooling system 115 to turn off or on as needed. Additionally or alternatively, the robot 108 may search for a cool location (e.g., away from the sun, more than a threshold distance away from one or more heat generating objects in a facility, etc.) to wait, for example, in response to receiving the indication that a temperature within the sensor system 102 is above the threshold temperature. The robot 108 may determine a shady location to wait. The robot 108 may record one or more images of an environment surrounding the robot and may determine a shady location based on pixel intensities, contrast, or other factors. Additionally or alternatively, the robot 108 may determine, based on inputting a portion of the one or more images into a machine learning model, a first location within the environment that receives less sunlight than a second location within the environment. The robot 108 may move to the first location, and may cause the cooling system 115 to run, for example, until a temperature of the sensor system 102 is below the threshold temperature. The cooling system 115 or a compressor of the cooling system 115 may be configured to cool a sensor of the OGI camera 114. For example, the cooling system 115 may be able to cool a sensor of the OGI camera 114 to below 78 degrees Kelvin. The sensor of the OGI camera 114 may include an infrared detector and a spectral filter. The spectral filter may prevent light outside a range of 3-5 micrometers from reaching the infrared detector. The cooling system 115 may include a Peltier system (e.g., with a thermoelectric cooler that cools by causing direct current to flow through a semiconductor of the cooling system) or a refrigeration cycle cooling system (e.g., with refrigerant, a compressor, and a condenser). The sensor system 102 and robot 108 may periodically send messages (e.g., heartbeat messages) or status updates to each other. The messages may indicate battery levels of the sensor system 102 or robot 108, or other indications such as error messages. In some embodiments, the robot 108 may determine that more than a threshold amount of time (e.g., 5 seconds, 30 seconds, 5 minutes, 30 minutes, etc.) has transpired since receiving a message (e.g., heartbeat message) from the sensor system 102. In response to determining that more than a threshold amount of time has transpired, the robot 108 may move to a charging station associated with the robot 108. Additionally or alternatively, the robot 108 may (e.g., in response to determining that more than a threshold amount of time has transpired) send a request to the server 106. The request may indicate a diagnostic procedure for the server 106 to perform on the sensor system 102.
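A compact watchdog sketch of the heartbeat behavior described above is shown below; the timeout value, class name, and recovery methods (returning to the charger and requesting a server-side diagnostic) follow the text but are otherwise assumptions.

```python
# Illustrative heartbeat watchdog; timeout value and APIs are assumptions.
import time

HEARTBEAT_TIMEOUT_S = 30.0

class HeartbeatWatchdog:
    def __init__(self):
        self.last_heartbeat = time.monotonic()

    def on_heartbeat(self, message):
        # Called whenever a heartbeat/status message arrives from the sensor system.
        self.last_heartbeat = time.monotonic()

    def check(self, robot, server):
        if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT_S:
            robot.go_to_charging_station()                     # fail-safe behavior
            server.request_diagnostic(target="sensor_system")  # ask the server to intervene
```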
For example, the server 106 may cause the sensor system 102 to reboot or reboot one or more components (e.g., the cooling system, etc.). Additionally or alternatively, the robot 108 may, in response to determining that more than a threshold amount of time has transpired since receiving a heartbeat message from the sensor system 102, cause a power source of the robot to charge a power source (e.g., a battery) of the sensor system 102 or cause the sensor system 102 to reboot. The robot 108 or sensor system 102 may determine (e.g., based on information received via the location sensor) that the robot 108 is at a location of the plurality of locations indicated in the path information. For example, a location sensor of the robot 108 may match the first location (e.g., a pipe, a tank, or other location) indicated in the path information. In some embodiments, the location sensor may allow triangulation with on-site beacons, may detect magnets (e.g., implanted in the floor, ceiling, walls, etc.) or optical codes (e.g., AprilTags, ArUco markers, bar codes, etc.) positioned in the facility, or may use satellite navigation (e.g., Galileo, Global Navigation Satellite System, Global Positioning System, etc.). In some embodiments, the robot 108 may be in a facility (e.g., a factory, other building, etc.) where the robot 108 or sensor system 102 are unable to receive GPS signals. The robot 108 may use simultaneous localization and mapping (SLAM) techniques to determine whether the robot 108 is at a location indicated by the path information, for example, if the robot 108 is unable to receive GPS signals. Some embodiments may navigate without building a 3D map of the environment (e.g., without SLAM), for example by representing the world as a locally consistent graph of waypoints and edges. In some embodiments, the sensor system 102 may include a location sensor or the robot 108 may include a location sensor. A location sensor may be implemented with visual sensing (e.g., via a camera). A location sensor may be implemented without sensing GPS signals or wireless beacons, or some embodiments may use these signals in addition to other location sensors. In some locations or facilities, the robot 108 may be unable to detect GPS or other wireless communication and may rely on visual SLAM, radar, or lidar systems to navigate the facility. The sensor system 102 or the robot 108 may store one or more maps (e.g., one or more electronic maps) associated with a facility. A map may be learned by the robot 108 or sensor system 102 via SLAM techniques (e.g., particle filter, extended Kalman filter, GraphSLAM, etc.). The maps may correspond to different configurations that the facility may be in. For example, some facilities may have a first configuration (e.g., layout of different objects) for one time period (e.g., winter season) and a second configuration for a different portion of the year (e.g., summer season). The sensor system 102 or robot 108 may use a map corresponding to the layout to assist in navigating the facility to perform an inspection. The path information may be indicated by the map. The map may indicate the locations of one or more aspects of a facility (e.g., location of stairs, pipes, boilers, storage tanks, or a variety of other objects). In some embodiments, the robot 108 may begin using a first map to navigate, determine that it is unable to recognize its surroundings, and in response may obtain or begin using a different map.
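One way the fall-back-to-another-map behavior described above might be sketched is shown below; the match_score function, the map store, and the 0.6 threshold are illustrative assumptions.

```python
# Sketch of selecting an alternative facility map when the active map no longer
# matches the robot's surroundings; match_score and thresholds are assumed.
def handle_lost_robot(robot, server, camera_image, candidate_maps, match_score,
                      threshold=0.6):
    scored = [(match_score(camera_image, m), m) for m in candidate_maps]
    best_score, best_map = max(scored, key=lambda pair: pair[0])
    if best_score >= threshold:
        robot.load_map(best_map)      # resume navigation with the matching map
    else:
        robot.start_mapping()         # no stored map matches: build a new one via SLAM
        server.notify("generating a new facility map")
```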
For example, the sensor system 102 or server 106 may receive, from the robot 108, navigation information indicating that the robot 108 is lost. The navigation information may be generated based on a determination that an environment (e.g., one or more objects in a facility) surrounding the robot 108 does not match information indicated by the first map. The navigation information may include one or more images of the environment surrounding the robot 108, or the last known location of the robot 108. In response to receiving the navigation information, the server 106 or the sensor system 102 may send a second map (e.g., a second map that is determined to match the navigation information received from the robot 108). The second map may include an indication of each location of the plurality of locations. In some embodiments, it may be determined that a map that matches the environment surrounding the robot 108 does not exist or cannot be obtained. In response to determining that the map cannot be obtained, the robot 108 may generate a map of the environment (e.g., via SLAM techniques). In response to determining that the robot 108 is at the location indicated by the path information, the robot may adjust the OGI camera 114 based on orientation information associated with the location. For example, the sensor system 102 may rotate to a position that allows an object (e.g., a pipe, storage tank, etc.) to be captured in a view of the cameras 113-114. In response to determining that the robot 108 is at the location, the sensor system 102 may adjust one or more other settings of the sensor system 102. The one or more other settings may include zoom, flash (or persistent onboard illumination), or other settings of the OGI camera 114 or the RGB camera 113. For example, at a first location the sensor system 102 may zoom in 10X if indicated by the path information. Additionally or alternatively, the sensor system 102 may determine an adjustment to be made based on the orientation (e.g., pose) of the robot 108. The adjustment may allow a region of interest at the first location to be captured within a field of view of a camera (e.g., camera 113, camera 114, etc.) of the sensor system 102. For example, to take a picture of a particular pipe, the sensor system 102 may need to be adjusted depending on whether the robot 108 is facing the pipe or facing a different direction. As an additional example, the sensor system 102 may be rotated to point upwards to view a location above the robot 108. In some embodiments, the orientation information may include a vector indicating a roll of the robot, a pitch of the robot, and a yaw of the robot. The robot 108 may adjust its position to match the indicated roll, pitch, and yaw. The sensor system 102 may send a request (e.g., to the robot 108 or the server 106) for position information associated with the robot 108. The position information may include pose information indicating the position and orientation of a coordinate frame of the robot 108. The pose information may include configuration information, which may include a set of scalar parameters that specify the positions of all of the robot's points relative to a fixed coordinate system. The position information may indicate positions of one or more joints of the robot 108, a direction that the robot 108 is facing, or other orientation information associated with the robot 108 (e.g., roll, pitch, yaw, etc.).
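The roll/pitch/yaw adjustment described above can be thought of as commanding only the residual rotation between the robot's reported pose and the waypoint's orientation vector; the angle-wrapping helper and the gimbal interface in this sketch are assumptions.

```python
# Sketch of aligning the sensor payload with a waypoint orientation (roll,
# pitch, yaw); the gimbal object and its rotate() method are assumptions.
import math

def wrap_angle(angle_rad):
    # Wrap to (-pi, pi] so the commanded rotation is the shortest one.
    return (angle_rad + math.pi) % (2 * math.pi) - math.pi

def align_payload(gimbal, robot_rpy, target_rpy):
    roll, pitch, yaw = (wrap_angle(t - r) for r, t in zip(robot_rpy, target_rpy))
    gimbal.rotate(roll=roll, pitch=pitch, yaw=yaw)
    return roll, pitch, yaw
```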
In response to sending the request, the sensor system 102 may receive pose information indicating a pose of the robot 108 relative to the first location of the plurality of locations. The sensor system 102 or robot 108 may adjust, based on the pose information, a position of the sensor system 102 such that the OGI camera 114 is facing the first location. The sensor system 102 may receive an indication that the orientation of the cameras 113 or 114 matches the orientation information associated with a location. For example, the robot 108 may use an actuator to adjust the position of the sensor system 102 to allow the pipe to be within view of a camera of the sensor system 102, and after adjusting the sensor system 102, the robot 108 may send a message to the sensor system 102 indicating that the sensor system 102 is in the appropriate orientation for the location. The sensor system 102 may record video or images with a camera (e.g., the RGB camera 113, the OGI camera 114, a thermal imaging camera, etc.), for example, in response to receiving the indication that the pose (position and orientation) of the camera matches the pose information for the location. The sensor system 102 may determine that a location should be inspected from multiple viewpoints, for example, based on the path information. The determination to inspect from multiple viewpoints may be made while at the location. For example, the sensor system 102 may determine that an object does not fit within a single frame of a camera and may take additional pictures to capture the entire object. The determination may be made autonomously by the sensor system 102 or via teleoperation (e.g., by receiving a command from a teleoperator to take additional pictures/video from additional viewpoints). The determination to take additional pictures may be in response to determining that a leak exists at the location. The sensor system 102 may determine that a first portion of an object at the location has been recorded by the OGI camera 114. In response to determining that the first portion of the object at the first location has been recorded, the sensor system 102 may cause the robot 108 to move so that a second portion of the object is within a field of view of the OGI camera 114 or other camera. The sensor system 102 may record the second portion of the object with one or more cameras (e.g., the OGI camera 114 or RGB camera 113). For example, a portion of a pipe may be 20 feet long and the sensor system 102 may cause the robot 108 to move to multiple locations along the pipe so that the pipe may be inspected using the OGI camera 114. In some embodiments, the sensor system 102 may determine that the robot 108 is too close to a location or object for purposes of recording the location or object. The sensor system 102 may send a message to the robot 108 to move to a location that is suitable for recording an object or location (e.g., a location such that an entire object fits within a field of view of the cameras 113-114). For example, the robot 108 may send a message to cause the sensor system 102 to record a video (e.g., or one or more images) with the OGI camera 114 or RGB camera 113. The robot 108 may receive, from the sensor system 102, an indication that the robot 108 is too close to the location or object to be recorded. In response to receiving the indication that the robot 108 is too close to the location or object, the robot 108 may move a threshold distance away from the first location. The robot 108 may cause the sensor system 102 to record the object or location with the RGB camera 113 or OGI camera 114.
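A short sketch of the too-close back-off behavior described above follows; the back-off distance and the object_overflows_frame indication are placeholders for whatever signal a sensor system actually provides.

```python
# Illustrative back-off-and-re-record behavior; distances and methods assumed.
import math

BACKOFF_M = 1.5

def record_with_backoff(robot, sensor_system, waypoint):
    clip = sensor_system.record_ogi(seconds=10)
    if sensor_system.object_overflows_frame():           # "too close" indication
        rx, ry = robot.location()[:2]
        wx, wy = waypoint.position[:2]
        heading = math.atan2(wy - ry, wx - rx)            # direction toward the object
        robot.move_relative(dx=-BACKOFF_M * math.cos(heading),
                            dy=-BACKOFF_M * math.sin(heading))
        clip = sensor_system.record_ogi(seconds=10)       # re-record from farther away
    return clip
```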
In some embodiments, the sensor system 102 may determine that lighting is inadequate and the sensor system 102 may illuminate an object with a light (e.g., an onboard light) of the sensor system 102 while recording an image or video. The sensor system or robot may store the recorded video (which includes images) or images in memory, e.g., caching video for a plurality of waypoints along a route before upload. The sensor system 102 may send the first video and the second video to the server 106. Sending a video or an image may include streaming, caching, and on-board transformation of the video before sending (e.g., detecting possible gas leaks via the robot 108 and only uploading segments of the video, reducing resolution, compressing, or cropping to exclude regions not including leaks, etc.). The resolution of an image may be reduced before sending to the server 106 or storing the image, for example, if no gas leak is detected in the image. The sensor system 102 or the robot 108 may transform the captured video and upload or send the transformed version of the captured video to the server 106. In some embodiments, the robot 108, sensor system 102, or server 106 may use image processing, computer vision, machine learning, or a variety of other techniques to determine whether a gas leak exists at one or more locations indicated by the path information. The sensor system 102 may determine that a gas leak exists based on one or more images or video recorded by the cameras 113-114. The sensor system 102 may record, via the RGB camera 113, an image of the first location, for example, in response to determining that a gas leak exists at the first location. In some embodiments, the sensor system 102 may compare a captured image with a reference image. The reference image may be an image (e.g., a historical image) of the location that was taken during a previous inspection of the location. The sensor system 102 may obtain, from a database, a historical image of the location. The historical image may depict an absence of a gas leak at the first location on a date that the historical image was taken. Additionally or alternatively, the sensor system 102 may use Gaussian Mixture Model-based foreground and background segmentation methods, for example, those described in “Background Modeling Using Mixture of Gaussians for Foreground Detection: A Survey,” by Thierry Bouwmans, published on Jan. 1, 2008, which is hereby incorporated by reference. In some embodiments, the sensor system 102, the robot 108, or the server 106 may use a machine learning model to determine whether a leak exists at a location. For example, the ML subsystem 116 may use one or more machine learning models described in connection with FIG. 7 to detect whether a gas leak exists. The sensor system 102 may determine a reference image that depicts an absence of a gas leak at the location on a date that the reference image was taken. The sensor system 102 may generate, based on inputting a first image (e.g., taken by the OGI camera 114) and the reference image into a machine learning model, a similarity score. The similarity score may be determined by generating vector representations of the reference image and the first image and comparing the vector representations using a distance metric (e.g., Euclidean distance, cosine distance, etc.). The sensor system 102 may compare the similarity score with a threshold similarity score. The sensor system 102 may determine that a gas leak exists at the first location, for example, based on comparing the similarity score with a threshold similarity score.
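The similarity-score comparison and the reduce-resolution-when-no-leak upload policy described above might look roughly like the following; the embed() encoder, the 0.85 threshold, and the downscale factor are assumptions, and OpenCV is used only as one convenient way to resize frames.

```python
# Sketch of the similarity-score check and conditional down-resolution; the
# encoder, threshold, and scale factor are illustrative assumptions.
import cv2
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def leak_suspected(embed, current_img, reference_img, threshold=0.85) -> bool:
    # Low similarity to the no-leak reference image suggests a possible leak.
    return cosine_similarity(embed(current_img), embed(reference_img)) < threshold

def prepare_for_upload(frame: np.ndarray, suspected: bool) -> np.ndarray:
    if suspected:
        return frame                                    # keep full resolution for review
    return cv2.resize(frame, None, fx=0.25, fy=0.25)    # shrink uninteresting frames
```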
One or more images may be preprocessed before being input into a machine learning model. For example, an image may be resized, one or more pixels may be modified (e.g., the image may be converted from color to grayscale), thresholding may be performed, or a variety of other preprocessing techniques may be used prior to inputting the image into the machine learning model. As described herein, inputting an image into a machine learning model may comprise preprocessing the image. In some embodiments, the ML subsystem may use a machine learning model to generate a vector representation (e.g., a vector in a latent embedding space, such as a vector generated by a trained autoencoder responsive to an image or a series of images such as video, etc.) of one or more images taken by the OGI camera114. The sensor system102may determine that a gas leak exists or does not exist, for example, based on a comparison of the vector representations. The sensor system102may obtain a plurality of historical images associated with the location (e.g., images taken on previous inspections of the location). The sensor system102or server106may generate, based on inputting the historical images into a machine learning model, a first vector representation of the images. The sensor system102may generate, based on inputting a second plurality of images (e.g., images taken during the current inspection) into a machine learning model, a second vector representation of the second plurality of images, and may determine, based on a comparison of the first vector representation with the second vector representation, that a gas leak exists at the first location. In some embodiments, the sensor system102may send images (e.g., in videos, in video format, or as standalone images) to the server106so that the server106may train one or more machine learning models to detect gas leaks. The images may be labeled by a user that monitors the robot108. For example, the label may indicate whether a gas leak exists or not. The sensor system102may record (e.g., via the OGI camera) a set of images including an image for each location of the plurality of locations indicated by the path information. The sensor system102may generate a label for each image in the set of images. Each label may indicate a location associated with a corresponding image. The sensor system102may send the set of images to the server106for use in training a machine learning model to detect gas leaks. The server106may provide a user interface for users to annotate one or more images received from the sensor system102or robot108. The server106may receive a first set of images associated with one or more locations indicated by the path information. The server106may generate a webpage including a user interface that is configured to receive input corresponding to one or more portions of an image. The input may be received from a user and may indicate whether or not a gas leak is shown in the image. The server106may receive, via the webpage, input indicating that a gas leak exists in the one or more images. The server106may generate, based on the input, a training data set for a machine learning model. For example, the input received from a user may indicate a label to use for each image in the dataset (e.g., whether a gas leak exists or not). In some embodiments, the machine learning model may be an object detection model with one or more convolutional layers and may be trained to detect whether a plume of gas exists in an image taken by the OGI camera114.
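A minimal preprocessing sketch consistent with the steps described above (resizing, grayscale conversion, optional thresholding, and scaling of pixel values) is shown below; the 224x224 input size and threshold value are illustrative assumptions, the input frame is assumed to be a 3-channel BGR image, and OpenCV is used only as one convenient library choice:

import cv2
import numpy as np

def preprocess(frame: np.ndarray, size=(224, 224), apply_threshold=False) -> np.ndarray:
    resized = cv2.resize(frame, size)                      # fixed input size for the model
    gray = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)       # color to grayscale
    if apply_threshold:
        _, gray = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    return gray.astype(np.float32) / 255.0                 # scale pixel values to [0, 1]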
The sensor system102may determine, based on output from the machine learning model, a target location within the plume of gas. For example, the machine learning model may be an object detection and localization model configured to output the coordinates (in pixel space) of a bounding box surrounding the plume of gas, and the sensor system102may use the coordinates to determine a target location (e.g., the center of the bounding box) within the plume of gas. In some embodiments, the sensor system102may include a laser sensor and may use the laser sensor to detect a concentration level of gas at the determined target location. The sensor system102may send a message (e.g., an alert) to the server106, for example, based on determining that the concentration level of gas at the target location exceeds a threshold concentration level. In some embodiments, the machine learning model used to detect whether a gas leak exists may be a Siamese network model. A Siamese network model may be used, for example, if training data is sparse. The Siamese network may be able to determine that an image of a location where a gas leak exists is different from one or more previous images taken of the location when no gas leak existed at the location. The Siamese network may be trained using a dataset comprising image pairs of locations. The image pairs may include positive image pairs (e.g., with two images that show no gas leak at a location). The image pairs may include negative image pairs (e.g., with one image that shows no gas leak at the location and a second image that shows a gas leak at the location). The Siamese network may be trained using a training dataset that includes sets comprising an anchor image, a positive image, and a negative image. The anchor image may be a first image associated with a location when no gas leak is present. The positive image may be a second image (e.g., different from the first image) of the location when no gas leak is present. The negative image may be an image of the location when a gas leak exists at the location. The Siamese network may use a triplet loss function when using sets of images with anchor, positive, and negative images. The Siamese network may be trained to generate vector representations such that a distance metric (e.g., Euclidean distance, cosine distance, etc.) between vector representations of an anchor image and a positive image is smaller than the distance between vector representations of the anchor image and a negative image. The Siamese network may comprise two or more identical subnetworks. The subnetworks may have the same architecture (e.g., types and numbers of layers) and may share the same parameters and weights. In some embodiments, the Siamese network may be implemented as a single network that takes as input two images (e.g., one after the other) and generates vector representations for each of the input images. Weights updated in one subnetwork may be updated in each other subnetwork in the same manner (e.g., corresponding weights in each subnetwork may be the same). Each subnetwork may include a convolutional neural network (e.g., with one or more convolutional or depthwise convolutional layers), an image encoding layer (e.g., a fully connected layer), a distance function layer, or an output layer (e.g., using a sigmoid activation function). In some embodiments, the distance function layer may be used as the output layer, for example, if a contrastive loss function is used.
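The following is a minimal, illustrative Siamese sketch in PyTorch, offered only as a stand-in for (not a definition of) the subnetwork architecture described above: a single shared convolutional encoder embeds both input images, and a Euclidean distance over the embeddings drives a contrastive loss. The layer sizes and margin are assumptions for illustration:

import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEncoder(nn.Module):
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, embed_dim),
        )

    def forward(self, x1, x2):
        # Shared weights: the same encoder embeds both images.
        return self.encoder(x1), self.encoder(x2)

def contrastive_loss(z1, z2, same_label, margin: float = 1.0):
    # same_label is 1.0 for matching pairs (e.g., both no-leak) and 0.0 for
    # mismatched pairs (e.g., no-leak reference versus an image with a plume).
    d = F.pairwise_distance(z1, z2)
    return torch.mean(same_label * d.pow(2) +
                      (1.0 - same_label) * torch.clamp(margin - d, min=0.0).pow(2))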
The distance function layer may output a distance value (e.g., Euclidean distance, or a variety of other distance metrics) indicating how similar two images are. For example, a first image of the two images may be an image taken when no gas leak existed at the location, and a second image of the two images may be an image taken during the current inspection. During training, the Siamese network may take as input a first image of a first location (e.g., when no gas leak is present), generate encodings of the first image, then without performing any updates on weights or biases of the network, may take as input a second image. The second image may be an image of a second location or may be an image of the first location when a gas leak is present at the first location. The sensor system102may determine that a leak exists, for example, if the distance corresponding to two images is below a threshold. The Siamese network may use a loss function (e.g., binary cross-entropy, triplet loss, contrastive loss, or a variety of other loss functions). The Siamese network may be trained via gradient descent (e.g., stochastic gradient descent) and backpropagation. The Siamese network may be trained to determine a threshold value (e.g., that is compared with output from the distance function layer) that maximizes (e.g., as determined according to a loss function used by the Siamese network) correct classifications and minimizes incorrect ones. For example, the Siamese network may be trained to determine that an image with a leak (e.g., an image taken by the OGI camera114showing a plume of gas) does not match an image of the same location where no gas leak exists. The Siamese network may output an indication of whether two input images are the same or different. If one image is a historical image with no gas leak present and one image is a new image with a gas leak (e.g., the new image shows a plume of gas), the Siamese network may generate output indicating that the images do not match and the sensor system102may determine that a gas leak is present at the corresponding location. The sensor system102may send an alert or message to the server106or a user device104if the sensor system102determines that a gas leak exists at a location. The sensor system102may make an initial determination of whether there is a gas leak at a location and then send the images to the server106for confirmation. The sensor system102may use a first machine learning model to make the determination and the server106may use a second machine learning model to confirm the determination made by the sensor system102. The machine learning model used by the server106may include more parameters (e.g., weights, layers, etc.) than the machine learning model used by the sensor system102or may obtain higher accuracy, or precision on a test set of images. The server106may request additional images be recorded, for example, if there is a conflict between the determination made by the sensor system102and the server106. For example, additional images may be requested if the sensor system102determines that there is a gas leak at the location and the server106(e.g., using the same images used by the sensor system) determines that no gas leak exists at the location. The server106may receive images (e.g., taken by the OGI camera114) and may determine, based on inputting the images into a machine learning model, that there is no gas leak at the first location. 
In response to determining that there is no gas leak at the first location, the server106may send an indication that no gas leak was detected in the first plurality of images or the server106may send a request (e.g., to the sensor system102) to record a second plurality of images of the first location. The second plurality of images may be used by the server106to confirm whether or not a gas leak exists at the location. In response to receiving the request to record additional images, the sensor system102may cause the robot108to move a threshold distance, for example, to obtain a different viewpoint of the location. The sensor system102may record additional images or video of the location from the different viewpoint. Using a different view to take additional pictures may enable the sensor system102or server106to confirm whether or not a gas leak is present at the location. The server106may cause the robot to move to the next location of the plurality of locations indicated by the path information, for example, in response to determining that there is no gas leak at the first location. The sensor system102may determine that a gas leak exists at a location by applying motion detection techniques (e.g., background subtraction) to one or more images taken by the OGI camera114. For example, the sensor system102may generate a subtraction image by using a first image as a reference image and subtracting one or more subsequent images (e.g., images recorded after the first image) from the reference image. The sensor system102may generate a binary image by thresholding the subtraction image. The binary image may indicate gas leaks as white pixels. For example, if more than a threshold number of white pixels are detected in the binary image, the sensor system102may determine that a gas leak exists at the location. The server106may process images or video recorded by the sensor system102to determine whether a gas leak is present at a location. The server106may use one or more machine learning models to analyze the first video or the second video to determine whether a gas leak is present at a location. In response to sending images or video to the server106, the sensor system102may receive, from the server106, a message indicating that the images or video do not indicate a gas leak. The message may indicate that no gas leak was detected in the images or video. For example, the server106may input the images or video into a machine learning model described in connection withFIG.4. The sensor system102may cause the robot108to continue along the path, for example, in response to receiving the message from the server106. For example, after receiving a message from the server106indicating that no gas leak was detected at a first location, the sensor system102may cause the robot108to move to a second location indicated by the path information. In some embodiments, the message from the server106may indicate that additional video should be recorded with the optical gas imaging camera114or the RGB camera113. In some embodiments the server106may obtain the images or video, determine that the images or video indicate a gas leak exists, and in response to determining that the first video indicates a gas leak, the server106may cause the sensor system102to record an additional video with the OGI camera114at the corresponding location. In some embodiments, the sensor system102or the robot108may flag an image for review by a user. 
For example, the sensor system102or the robot108may send an image to a user to confirm whether or not a gas leak exists at a location (e.g., even without the user making the final determination of whether a gas leak exists). In some embodiments, the server106may oversee or control a plurality of robots. The robots may be located in various places throughout an environment or facility. The server106may determine a robot108that is closest to a location to be inspected and may cause the robot108to inspect the location. The server106may obtain the inspection path information. The server106may obtain location information corresponding to the plurality of robots (e.g., including one or more instances of the robot108). The server106may determine, based on the location information, that the robot108is closer to the first location than other robots of the plurality of robots, and in response to determining that the robot108is closer to the first location than other robots of the plurality of robots, may cause the robot108to inspect the first location or may send the inspection path information to the robot108. The sensor system102, the robot108, or the server106may communicate with each other and may share information indicating resource levels or other information. For example, the sensor system102may share information indicating a battery level of the sensor system102, or whether one or more components (e.g., the location sensor112, the RGB camera113, the OGI camera114, the cooling system115, or the ML subsystem116) are working properly. The sensor system102may receive information indicating a resource level of the robot108or whether one or more components of the robot108are working properly. The sensor system102may receive battery power from the robot108. For example, if a battery level of the sensor system102is below a threshold and a battery level of the robot108is above a threshold (e.g., the same threshold or a different threshold), the sensor system102may draw power from the battery of the robot108. In some embodiments, the server may determine to use one robot over other robots based on status information associated with the robots or sensor systems associated with corresponding robots of the plurality of robots. The status information may indicate a battery level, temperature, or other information about the robot. For example, the server106may receive information indicating that a temperature of the sensor system102of a robot is within a threshold range of temperatures and that the robot is within a threshold distance of the location. In response, the server106may determine to cause the robot to inspect the location. The sensor system102may receive information indicating a battery level of the robot. The sensor system102may determine (e.g., based on the information indicating a battery level of the robot) that the battery level of the robot satisfies a threshold. For example, the battery level may be below a threshold amount required to complete an inspection of a facility. The sensor system102may determine that the robot108may need to conserve battery for moving between locations on the inspection path. The sensor system102may determine that one or more locations of the plurality of locations along the path have yet to be recorded.
In response to determining that the battery level of the robot satisfies the threshold and that one or more locations of the plurality of locations along the path have yet to be recorded, the sensor system102may stop receiving energy from a battery of the robot108and begin receiving energy from a battery of the sensor system102. The robot108may be a bipedal robot, a wheeled robot (such as one with mecanum or omni wheels), a quadruped robot (like Spot™ from Boston Dynamics™ of Boston, Massachusetts), a track-drive robot, an articulated robot (e.g., an arm having two, six, or ten degrees of freedom, etc.), a cartesian robot (e.g., rectilinear or gantry robots, robots having three prismatic joints, etc.), Selective Compliance Assembly Robot Arm (SCARA) robots (e.g., with a donut shaped work envelope, with two parallel joints that provide compliance in one selected plane, with rotary shafts positioned vertically, with an end effector attached to an arm, etc.), delta robots (e.g., parallel link robots with parallel joint linkages connected with a common base, having direct control of each joint over the end effector, which may be used for pick-and-place or product transfer applications, etc.), polar robots (e.g., with a twisting joint connecting the arm with the base and a combination of two rotary joints and one linear joint connecting the links, having a centrally pivoting shaft and an extendable rotating arm, spherical robots, etc.), cylindrical robots (e.g., with at least one rotary joint at the base and at least one prismatic joint connecting the links, with a pivoting shaft and extendable arm that moves vertically and by sliding, with a cylindrical configuration that offers vertical and horizontal linear movement along with rotary movement about the vertical axis, etc.), a self-driving car, a kitchen appliance, construction equipment, or a variety of other types of robots. The robot may include one or more cameras, joints, servomotors, stepper motor actuators, servo motor actuators, pneumatic actuators, or a variety of other components. In some embodiments, the robot108may include wheels, continuous tracks, or a variety of other means for moving, e.g., with or without a tether. In some embodiments, the robot108may be a drone or other flying robot that is capable of flying to each location indicated in the path information. In some embodiments, the robot108may be a boat or submarine that is capable of inspecting locations underwater. In some embodiments, the robot108may be a drone capable of traveling in outer space and inspecting one or more locations on a space station or other spacecraft. The system100may include one or more processors. Some processors might be in the robot108, in the server106, or in the sensor system102. Instructions for implementing one or more aspects described herein may be executed by the one or more processors. The instructions may be distributed among the robot108, the server106, or the sensor system102. The system100may be compliant with one or more regulations set forth by the Environmental Protection Agency.
For example, the system100may be configured to perform inspections that show whether a facility is compliant with requirements in "Oil and Natural Gas Sector: Emission Standards for New, Reconstructed, and Modified Sources" in the Federal Register ("2016 NSPS OOOOa") or its corresponding 2018 Proposal. FIG.2Ashows an example sensor system201that may include any component or perform any function described above in connection with the sensor system102ofFIG.1. The sensor system201may include an OGI camera205(e.g., which may be the same as the OGI camera114ofFIG.1), an RGB camera210(e.g., which may be the same as the RGB camera113ofFIG.1), and a case215to house the components of the sensor system201. The sensor system201may include any component discussed in connection withFIG.8below. FIG.2Bshows an exploded view of the sensor system201with an exploded view of the RGB camera210and the OGI camera205. The sensor system201may include a printed circuit board220. The printed circuit board220may include a central processing unit, a graphics processing unit, a vision processing unit (e.g., a microprocessor designed to accelerate machine vision tasks), or a variety of components such as those described in connection withFIG.8below. The vision processing unit may be suitable for executing machine learning models or other computer vision techniques such as those described in connection withFIG.1orFIG.7. The sensor system102described in connection withFIG.1may include a vision processing unit to assist in performing one or more machine learning operations described herein. The sensor system201may include a cooling system225(e.g., which may be the same as the cooling system115ofFIG.1). The sensor system201may include one or more batteries216. FIG.3shows an example robot301(e.g., the robot108) with the sensor system320(e.g., the sensor system102) attached. The robot301may include sensors310and sensors312. The sensors310-312may include RGB cameras, infrared cameras, depth sensing cameras, or a variety of other sensors. The cameras may be stereo cameras that provide black and white images and video. The robot301may be communicatively coupled with the sensor system320. The robot301may be able to cause the sensor system320to rotate up/down or from side to side via a joint322or other means, e.g., joint322may include three degrees of freedom (e.g., pitch, roll, and yaw) independently actuated by the robot with servo motors or stepper motors. The robot301may include one or more legs315for moving in an environment, each being actuated by two or more such actuators in some cases. FIG.4shows an example flowchart of the actions involved in inspecting for gas leaks with a robot. For example, process400may represent the actions taken by one or more devices shown inFIGS.1-3orFIG.8. At405, robot system100(e.g., using one or more components in system100(FIG.1) or computing system800via I/O interface850and/or processors810a-810n(FIG.8)) may obtain path information indicating a path for a robot (e.g., the robot108ofFIG.1) to travel. The path information may indicate a plurality of locations along the path to inspect with a sensor system (e.g., the sensor system102ofFIG.1). Each location may be associated with orientation information that indicates an orientation that the sensor system should be moved to for recording a video. For example, an orientation of the sensor system at a particular location may allow the sensor system to capture video or images of a gas line or other area of interest.
At410, robot system100(e.g., using one or more components in system100(FIG.1) and/or computing system800via one or more processors810a-810nand system memory820(FIG.8)) may cause the robot to move along the path. At415, robot system100(e.g., using one or more components in system100(FIG.1) and/or computing system800via one or more processors810a-810n, I/O interface850, and/or system memory820(FIG.8)) may determine that the robot is at a first location of the path. The determination may be made based on information received via a location sensor of the robot or a location sensor of the sensor system. At420, robot system100(e.g., using one or more components in system100(FIG.1) and/or computing system800via one or more processors810a-810n(FIG.8)) may cause the robot to adjust one or more cameras based on first orientation information associated with the first location. At425, robot system100(e.g., the sensor system102or the server106(FIG.1) or computing system800(FIG.8)) may receive an indication that the orientation of the camera matches the orientation information associated with the first location. At430, robot system100(e.g., the sensor system102(FIG.1) and/or computing system800via one or more processors810a-810n(FIG.8)) may record video or one or more images of the first location with one or more cameras (e.g., via a camera of the sensor system). The sensor system may record a first video with the OGI camera and a second video with the RGB camera, for example, in response to receiving an indication that the orientation of the OGI camera matches the orientation information. At435, robot system100(e.g., the sensor system102(FIG.1) and/or computing system800via one or more processors810a-810n(FIG.8)) may store one or more recorded images or videos of the first location in memory. It is contemplated that the actions or descriptions ofFIG.4may be used with any other embodiment of this disclosure. In addition, the actions and descriptions described in relation toFIG.4may be done in alternative orders or in parallel to further the purposes of this disclosure. For example, each of these actions may be performed in any order, in parallel, or simultaneously to reduce lag or increase the speed of the system or method, none of which is to suggest that any other description is limiting. Furthermore, it should be noted that any of the devices or equipment discussed in relation toFIGS.1-3could be used to perform one or more of the actions inFIG.4. FIG.5shows an example flowchart of the actions involved in inspecting for gas leaks with a robot. For example, process500may represent the actions taken by one or more devices shown inFIGS.1-3orFIG.8. At505, the robot108(e.g., using one or more components in system100(FIG.1) or computing system800via I/O interface850and/or processors810a-810n(FIG.8)) may obtain path information indicating a path for a robot (e.g., the robot108ofFIG.1) to travel. The path information may indicate a plurality of locations along the path to inspect with a sensor system (e.g., the sensor system102ofFIG.1). Each location may be associated with orientation information that indicates an orientation that the sensor system should be moved to for recording a video. For example, an orientation of the sensor system at a particular location may allow the sensor system to capture video or images of a gas line or other area of interest.
At510, the robot108(e.g., using one or more components in system100(FIG.1) and/or computing system800via one or more processors810a-810nand system memory820(FIG.8)) may determine (e.g., based on information received via a location sensor) that a distance between a location of the robot and a first location on the inspection path is greater than a threshold distance. At515, the robot108(e.g., using one or more components in system100(FIG.1) and/or computing system800via one or more processors810a-810n, I/O interface850, and/or system memory820(FIG.8)) may cause a compressor of an OGI camera to turn off, for example, in response to determining that the distance is greater than the threshold distance. At520, the robot108(e.g., using one or more components in system100(FIG.1) and/or computing system800via one or more processors810a-810n(FIG.8)) may move along the path indicated by the inspection path information. At525, the robot108(e.g., using one or more components in system100(FIG.1) or computing system800(FIG.8)) may cause the compressor to turn on, for example, in response to determining that the robot is within a threshold distance of the first location (e.g., within 10 feet of the first location, etc.). At530, the robot108(e.g., using one or more components in system100(FIG.1) and/or computing system800via one or more processors810a-810n(FIG.8)) may cause the sensor system to record video. At535, the robot108(e.g., using one or more components in system100(FIG.1) and/or computing system800via one or more processors810a-810n(FIG.8)) may cause the sensor system to store the video in memory. It is contemplated that the actions or descriptions ofFIG.5may be used with any other embodiment of this disclosure, as is generally the case with the various features described herein. In addition, the actions and descriptions described in relation toFIG.5may be done in alternative orders or in parallel to further the purposes of this disclosure. For example, each of these actions may be performed in any order, in parallel, or simultaneously to reduce lag or increase the speed of the system or method, none of which is to suggest that any other description is limiting. Furthermore, it should be noted that any of the devices or equipment discussed in relation toFIGS.1-3could be used to perform one or more of the actions inFIG.5. FIG.6shows an example flowchart of the actions involved in inspecting for gas leaks with a robot. For example, process600may represent the actions taken by one or more devices shown inFIGS.1-3orFIG.8. At605, robot system100(e.g., using one or more components in system100(FIG.1) or computing system800via I/O interface850and/or processors810a-810n(FIG.8)) may obtain path information indicating a path for a robot (e.g., the robot108ofFIG.1) to travel. The path information may indicate a plurality of locations along the path to inspect with a sensor system (e.g., the sensor system102ofFIG.1). Each location may be associated with orientation information that indicates an orientation that the sensor system should be moved to for recording a video. For example, an orientation of the sensor system at a particular location may allow the sensor system to capture video or images of a gas line or other area of interest. At610, robot system100(e.g., using one or more components in system100(FIG.1) and/or computing system800via one or more processors810a-810nand system memory820(FIG.8)) may cause the robot to move along the path. 
At615, robot system100(e.g., using one or more components in system100(FIG.1) and/or computing system800via one or more processors810a-810n, I/O interface850, and/or system memory820(FIG.8)) may determine that the robot is at a first location of the path. The determination may be made based on information received via a location sensor of the robot or a location sensor of the sensor system. At620, robot system100(e.g., using one or more components in system100(FIG.1) and/or computing system800via one or more processors810a-810n(FIG.8)) may cause the robot to adjust one or more cameras based on first orientation information associated with the first location. At625, robot system100(e.g., the sensor system102or the server106(FIG.1) or computing system800(FIG.8)) may record images of a first location indicated by the path information. The sensor system may record video or one or more images of the first location with one or more cameras (e.g., via a camera of the sensor system). The sensor system may record a first video with the OGI camera and a second video with the RGB camera, for example, in response to receiving an indication that the orientation of the OGI camera matches the orientation information. At630, robot system100(e.g., the sensor system102(FIG.1) and/or computing system800via one or more processors810a-810n(FIG.8)) may determine (e.g., based on one or more images or videos recorded by the sensor system) that a gas leak exists at the first location. For example, the sensor system102may use background subtraction between two images recorded by the OGI camera to detect movement (e.g., of gas) in the images. Additionally or alternatively, the sensor system may input one or more images into a machine learning model (e.g., as described inFIG.4) to detect whether gas is present at a location. At635, robot system100(e.g., the sensor system102(FIG.1) and/or computing system800via one or more processors810a-810n(FIG.8)) may send an indication that a gas leak exists at the first location. It is contemplated that the actions or descriptions ofFIG.6may be used with any other embodiment of this disclosure. In addition, the actions and descriptions described in relation toFIG.6may be done in alternative orders or in parallel to further the purposes of this disclosure. For example, each of these actions may be performed in any order, in parallel, or simultaneously to reduce lag or increase the speed of the system or method, none of which is to suggest that any other description is limiting. Furthermore, it should be noted that any of the devices or equipment discussed in relation toFIGS.1-3could be used to perform one or more of the actions inFIG.6. One or more models discussed above may be implemented (e.g., in part), for example, as described in connection with the machine learning model742ofFIG.7. With respect toFIG.7, machine learning model742may take inputs744and provide outputs746. In one use case, outputs746may be fed back to machine learning model742as input to train machine learning model742(e.g., alone or in conjunction with user indications of the accuracy of outputs746, labels associated with the inputs, or with other reference feedback and/or performance metric information). In another use case, machine learning model742may update its configurations (e.g., weights, biases, or other parameters) based on its assessment of its prediction (e.g., outputs746) and reference feedback information (e.g., user indication of accuracy, reference labels, or other information).
In another example use case, where machine learning model742is a neural network, connection weights may be adjusted to reconcile differences between the neural network's prediction and the reference feedback. In a further use case, one or more neurons (or nodes) of the neural network may require that their respective errors are sent backward through the neural network to them to facilitate the update process (e.g., backpropagation of error). Updates to the connection weights may, for example, be reflective of the magnitude of error propagated backward after a forward pass has been completed. In this way, for example, the machine learning model742may be trained to generate results (e.g., response time predictions, sentiment identifiers, urgency levels, etc.) with better recall, accuracy, and/or precision. In some embodiments, the machine learning model742may include an artificial neural network. In such embodiments, machine learning model742may include an input layer and one or more hidden layers. Each neural unit of the machine learning model may be connected with one or more other neural units of the machine learning model742. Such connections can be enforcing or inhibitory in their effect on the activation state of connected neural units. Each individual neural unit may have a summation function which combines the values of one or more of its inputs together. Each connection (or the neural unit itself) may have a threshold function that a signal must surpass before it propagates to other neural units. The machine learning model742may be self-learning or trained, rather than explicitly programmed, and may perform significantly better in certain areas of problem solving, as compared to computer programs that do not use machine learning. During training, an output layer of the machine learning model742may correspond to a classification, and an input known to correspond to that classification may be input into an input layer of the machine learning model during training. During testing, an input without a known classification may be input into the input layer, and a determined classification may be output. For example, the classification may be an indication of whether an action is predicted to be completed by a corresponding deadline or not. The machine learning model742trained by the ML subsystem116may include one or more embedding layers at which information or data (e.g., any data or information discussed above in connection withFIGS.1-3) is converted into one or more vector representations. The one or more vector representations may be pooled at one or more subsequent layers to convert the one or more vector representations into a single vector representation. The machine learning model742may be structured as a factorization machine model. The machine learning model742may be a non-linear model and/or supervised learning model that can perform classification and/or regression. For example, the machine learning model742may be a general-purpose supervised learning algorithm that the system uses for both classification and regression tasks. Alternatively, the machine learning model742may include a Bayesian model configured to perform variational inference, for example, to predict whether an action will be completed by the deadline. The machine learning model742may be implemented as a decision tree and/or as an ensemble model (e.g., using random forest, bagging, adaptive boosting, gradient boost, XGBoost, etc.).
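As a minimal, illustrative sketch of the weight-update process described above (with placeholder data and a deliberately small network, and not the actual machine learning model742), a supervised training loop might look like the following, where the error between predictions and reference labels is propagated backward to adjust connection weights:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
loss_fn = nn.BCEWithLogitsLoss()                 # binary "leak / no leak" objective
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

features = torch.randn(32, 128)                  # e.g., pooled embeddings of recorded frames
labels = torch.randint(0, 2, (32, 1)).float()    # reference feedback (1 = leak, 0 = no leak)

for epoch in range(10):
    optimizer.zero_grad()
    prediction = model(features)
    loss = loss_fn(prediction, labels)           # difference between prediction and reference feedback
    loss.backward()                              # propagate the error backward
    optimizer.step()                             # update connection weights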
FIG.8is a diagram that illustrates an exemplary computing system800in accordance with embodiments of the present technique. Various portions of systems and methods described herein may include or be executed on one or more computer systems similar to computing system800. Further, processes and modules described herein may be executed by one or more processing systems similar to that of computing system800. Computing system800may include one or more processors (e.g., processors810a-810n) coupled to system memory820, an input/output (I/O) device interface830, and a network interface840via an input/output (I/O) interface850. A processor may include a single processor or a plurality of processors (e.g., distributed processors). A processor may be any suitable processor capable of executing or otherwise performing instructions. A processor may include a central processing unit (CPU) that carries out program instructions to perform the arithmetical, logical, and input/output operations of computing system800. A processor may execute code (e.g., processor firmware, a protocol stack, a database management system, an operating system, or a combination thereof) that creates an execution environment for program instructions. A processor may include a programmable processor. A processor may include general or special purpose microprocessors. A processor may receive instructions and data from a memory (e.g., system memory820). Computing system800may be a uni-processor system including one processor (e.g., processor810a), or a multi-processor system including any number of suitable processors (e.g.,810a-810n). Multiple processors may be employed to provide for parallel or sequential execution of one or more portions of the techniques described herein. Processes, such as logic flows, described herein may be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating corresponding output. Processes described herein may be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). Computing system800may include a plurality of computing devices (e.g., distributed computer systems) to implement various processing functions. I/O device interface830may provide an interface for connection of one or more I/O devices860to computing system800. I/O devices may include devices that receive input (e.g., from a user) or output information (e.g., to a user). I/O devices860may include, for example, a graphical user interface presented on displays (e.g., a cathode ray tube (CRT) or liquid crystal display (LCD) monitor), pointing devices (e.g., a computer mouse or trackball), keyboards, keypads, touchpads, scanning devices, voice recognition devices, gesture recognition devices, printers, audio speakers, microphones, cameras, or the like. I/O devices860may be connected to computing system800through a wired or wireless connection. I/O devices860may be connected to computing system800from a remote location. I/O devices860located on a remote computer system, for example, may be connected to computing system800via a network and network interface840. Network interface840may include a network adapter that provides for connection of computing system800to a network. Network interface840may facilitate data exchange between computing system800and other devices connected to the network.
Network interface840may support wired or wireless communication. The network may include an electronic communication network, such as the Internet, a local area network (LAN), a wide area network (WAN), a cellular communications network, or the like. System memory820may be configured to store program instructions870or data880. Program instructions870may be executable by a processor (e.g., one or more of processors810a-810n) to implement one or more embodiments of the present techniques. Instructions870may include modules of computer program instructions for implementing one or more techniques described herein with regard to various processing modules. Program instructions may include a computer program (which in certain forms is known as a program, software, software application, script, or code). A computer program may be written in a programming language, including compiled or interpreted languages, or declarative or procedural languages. A computer program may include a unit suitable for use in a computing environment, including as a stand-alone program, a module, a component, or a subroutine. A computer program may or may not correspond to a file in a file system. A program may be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program may be deployed to be executed on one or more computer processors located locally at one site or distributed across multiple remote sites and interconnected by a communication network. System memory820may include a tangible program carrier having program instructions stored thereon. A tangible program carrier may include a non-transitory computer readable storage medium. A non-transitory computer readable storage medium may include a machine readable storage device, a machine readable storage substrate, a memory device, or any combination thereof. Non-transitory computer readable storage medium may include non-volatile memory (e.g., flash memory, ROM, PROM, EPROM, EEPROM memory), volatile memory (e.g., random access memory (RAM), static random access memory (SRAM), synchronous dynamic RAM (SDRAM)), bulk storage memory (e.g., CD-ROM and/or DVD-ROM, hard-drives), or the like. System memory820may include a non-transitory computer readable storage medium that may have program instructions stored thereon that are executable by a computer processor (e.g., one or more of processors810a-810n) to cause the subject matter and the functional operations described herein. A memory (e.g., system memory820) may include a single memory device and/or a plurality of memory devices (e.g., distributed memory devices). I/O interface850may be configured to coordinate I/O traffic between processors810a-810n, system memory820, network interface840, I/O devices860, and/or other peripheral devices. I/O interface850may perform protocol, timing, or other data transformations to convert data signals from one component (e.g., system memory820) into a format suitable for use by another component (e.g., processors810a-810n). I/O interface850may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard. 
Embodiments of the techniques described herein may be implemented using a single instance of computing system800or multiple computer systems800configured to host different portions or instances of embodiments. Multiple computer systems800may provide for parallel or sequential processing/execution of one or more portions of the techniques described herein. Those skilled in the art will appreciate that computing system800is merely illustrative and is not intended to limit the scope of the techniques described herein. Computing system800may include any combination of devices or software that may perform or otherwise provide for the performance of the techniques described herein. For example, computing system800may include or be a combination of a cloud-computing system, a data center, a server rack, a server, a virtual server, a desktop computer, a laptop computer, a tablet computer, a server device, a client device, a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a vehicle-mounted computer, or a Global Positioning System (GPS), or the like. Computing system800may also be connected to other devices that are not illustrated, or may operate as a stand-alone system. In addition, the functionality provided by the illustrated components may in some embodiments be combined in fewer components or distributed in additional components. Similarly, in some embodiments, the functionality of some of the illustrated components may not be provided or other additional functionality may be available. Those skilled in the art will also appreciate that while various items are illustrated as being stored in memory or on storage while being used, these items or portions of them may be transferred between memory and other storage devices for purposes of memory management and data integrity. Alternatively, in other embodiments some or all of the software components may execute in memory on another device and communicate with the illustrated computer system via inter-computer communication. Some or all of the system components or data structures may also be stored (e.g., as instructions or structured data) on a computer-accessible medium or a portable article to be read by an appropriate drive, various examples of which are described above. In some embodiments, instructions stored on a computer-accessible medium separate from computing system800may be transmitted to computing system800via transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network or a wireless link. Various embodiments may further include receiving, sending, or storing instructions or data implemented in accordance with the foregoing description upon a computer-accessible medium. Accordingly, the present disclosure may be practiced with other computer system configurations. In block diagrams, illustrated components are depicted as discrete functional blocks, but embodiments are not limited to systems in which the functionality described herein is organized as illustrated. The functionality provided by each of the components may be provided by software or hardware modules that are differently organized than is presently depicted, for example such software or hardware may be intermingled, conjoined, replicated, broken up, distributed (e.g., within a data center or geographically), or otherwise differently organized. 
The functionality described herein may be provided by one or more processors of one or more computers executing code stored on a tangible, non-transitory, machine readable medium. In some cases, third party content delivery networks may host some or all of the information conveyed over networks, in which case, to the extent information (e.g., content) is said to be supplied or otherwise provided, the information may be provided by sending instructions to retrieve that information from a content delivery network. The reader should appreciate that the present application describes several disclosures. Rather than separating those disclosures into multiple isolated patent applications, applicants have grouped these disclosures into a single document because their related subject matter lends itself to economies in the application process. But the distinct advantages and aspects of such disclosures should not be conflated. In some cases, embodiments address all of the deficiencies noted herein, but it should be understood that the disclosures are independently useful, and some embodiments address only a subset of such problems or offer other, unmentioned benefits that will be apparent to those of skill in the art reviewing the present disclosure. Due to costs constraints, some features disclosed herein may not be presently claimed and may be claimed in later filings, such as continuation applications or by amending the present claims. Similarly, due to space constraints, neither the Abstract nor the Summary sections of the present document should be taken as containing a comprehensive listing of all such disclosures or all aspects of such disclosures. It should be understood that the description and the drawings are not intended to limit the disclosure to the particular form disclosed, but to the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure as defined by the appended claims. Further modifications and alternative embodiments of various aspects of the disclosure will be apparent to those skilled in the art in view of this description. Accordingly, this description and the drawings are to be construed as illustrative only and are for the purpose of teaching those skilled in the art the general manner of carrying out the disclosure. It is to be understood that the forms of the disclosure shown and described herein are to be taken as examples of embodiments. Elements and materials may be substituted for those illustrated and described herein, parts and processes may be reversed or omitted, and certain features of the disclosure may be utilized independently, all as would be apparent to one skilled in the art after having the benefit of this description of the disclosure. Changes may be made in the elements described herein without departing from the spirit and scope of the disclosure as described in the following claims. Headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description. As used throughout this application, the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). The words “include”, “including”, and “includes” and the like mean including, but not limited to. As used throughout this application, the singular forms “a,” “an,” and “the” include plural referents unless the content explicitly indicates otherwise. 
Thus, for example, reference to “an element” or “a element” includes a combination of two or more elements, notwithstanding use of other terms and phrases for one or more elements, such as “one or more.” The term “or” is, unless indicated otherwise, non-exclusive, i.e., encompassing both “and” and “or.” Terms describing conditional relationships, e.g., “in response to X, Y,” “upon X, Y,”, “if X, Y,” “when X, Y,” and the like, encompass causal relationships in which the antecedent is a necessary causal condition, the antecedent is a sufficient causal condition, or the antecedent is a contributory causal condition of the consequent, e.g., “state X occurs upon condition Y obtaining” is generic to “X occurs solely upon Y” and “X occurs upon Y and Z.” Such conditional relationships are not limited to consequences that instantly follow the antecedent obtaining, as some consequences may be delayed, and in conditional statements, antecedents are connected to their consequents, e.g., the antecedent is relevant to the likelihood of the consequent occurring. Statements in which a plurality of attributes or functions are mapped to a plurality of objects (e.g., one or more processors performing steps A, B, C, and D) encompasses both all such attributes or functions being mapped to all such objects and subsets of the attributes or functions being mapped to subsets of the attributes or functions (e.g., both all processors each performing steps A-D, and a case in which processor1performs step A, processor2performs step B and part of step C, and processor3performs part of step C and step D), unless otherwise indicated. Similarly, reference to “a computer system” performing step A and “the computer system” performing step B can include the same computing device within the computer system performing both steps or different computing devices within the computer system performing steps A and B. Further, unless otherwise indicated, statements that one value or action is “based on” another condition or value encompass both instances in which the condition or value is the sole factor and instances in which the condition or value is one factor among a plurality of factors. Unless otherwise indicated, statements that “each” instance of some collection have some property should not be read to exclude cases where some otherwise identical or similar members of a larger collection do not have the property, i.e., each does not necessarily mean each and every. Limitations as to sequence of recited steps should not be read into the claims unless explicitly specified, e.g., with explicit language like “after performing X, performing Y,” in contrast to statements that might be improperly argued to imply sequence limitations, like “performing X on items, performing Y on the X'ed items,” used for purposes of making claims more readable rather than specifying sequence. Statements referring to “at least Z of A, B, and C,” and the like (e.g., “at least Z of A, B, or C”), refer to at least Z of the listed categories (A, B, and C) and do not require at least Z units in each category. Unless specifically stated otherwise, as apparent from the discussion, it is appreciated that throughout this specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining” or the like refer to actions or processes of a specific apparatus, such as a special purpose computer or a similar special purpose electronic processing/computing device. 
Features described with reference to geometric constructs, like “parallel,” “perpendicular/orthogonal,” “square”, “cylindrical,” and the like, should be construed as encompassing items that substantially embody the properties of the geometric construct, e.g., reference to “parallel” surfaces encompasses substantially parallel surfaces. The permitted range of deviation from Platonic ideals of these geometric constructs is to be determined with reference to ranges in the specification, and where such ranges are not stated, with reference to industry norms in the field of use, and where such ranges are not defined, with reference to industry norms in the field of manufacturing of the designated feature, and where such ranges are not defined, features substantially embodying a geometric construct should be construed to include those features within 15% of the defining attributes of that geometric construct. The terms “first”, “second”, “third,” “given” and so on, if used in the claims, are used to distinguish or otherwise identify, and not to show a sequential or numerical limitation. As is the case in ordinary usage in the field, data structures and formats described with reference to uses salient to a human need not be presented in a human-intelligible format to constitute the described data structure or format, e.g., text need not be rendered or even encoded in Unicode or ASCII to constitute text; images, maps, and data-visualizations need not be displayed or decoded to constitute images, maps, and data-visualizations, respectively; speech, music, and other audio need not be emitted through a speaker or decoded to constitute speech, music, or other audio, respectively. Computer implemented instructions, commands, and the like are not limited to executable code and can be implemented in the form of data that causes functionality to be invoked, e.g., in the form of arguments of a function or API call. To the extent bespoke noun phrases (and other coined terms) are used in the claims and lack a self-evident construction, the definition of such phrases may be recited in the claim itself, in which case, the use of such bespoke noun phrases should not be taken as invitation to impart additional limitations by looking to the specification or extrinsic evidence. In this patent filing, to the extent any U.S. patents, U.S. patent applications, or other materials (e.g., articles) have been incorporated by reference, the text of such materials is only incorporated by reference to the extent that no conflict exists between such material and the statements and drawings set forth herein. In the event of such conflict, the text of the present document governs, and terms in this document should not be given a narrower reading in virtue of the way in which those terms are used in other materials incorporated by reference. The present techniques will be better understood with reference to the following enumerated embodiments: 1. 
A method comprising: obtaining inspection path information indicating a path for the robot to travel, and a plurality of locations along the path to inspect with the sensor system, wherein each location of the plurality of locations is associated with orientation information indicating an orientation that the RGB camera and the OGI camera should be placed in to record a video; causing the robot to move along the path; determining, based on information received via the location sensor, that the robot is at a first location of the plurality of locations; in response to determining that the robot is at the first location, causing the robot to adjust the OGI camera based on first orientation information associated with the first location; receiving, from the robot, an indication that an orientation of the OGI camera matches the first orientation information; in response to receiving the indication that the orientation of the OGI camera matches the first orientation information, recording a first video with the OGI camera and a second video with the RGB camera; and sending the first video and the second video to the server. 2. The method of any of the preceding embodiments, further comprising: receiving, from the server, a message indicating that the first video and the second video do not indicate a gas leak; and in response to receiving the message, causing the robot to move to a second location of the plurality of locations. 3. The method of any of the preceding embodiments, further comprising: receiving information indicating a battery level of the robot; determining, based on the information indicating a battery level of the robot, that the battery level of the robot satisfies a threshold; determining that one or more locations of the plurality of locations along the path have yet to be recorded; and in response to determining that the battery level of the robot satisfies the threshold and that one or more locations of the plurality of locations along the path have yet to be recorded, causing the sensor system to stop receiving energy from a battery of the robot and begin receiving energy from a battery of the sensor system. 4. The method of any of the preceding embodiments, wherein the OGI camera comprises an indium antimonide detector. 5. The method of any of the preceding embodiments, wherein the OGI camera comprises a quantum well infrared photodetector. 6. The method of any of the preceding embodiments, wherein causing the robot to adjust the OGI camera based on first orientation information associated with the first location comprises: sending, to the robot, a request for pose information associated with the robot; in response to sending the request, receiving pose information indicating a pose of the robot relative to the first location of the plurality of locations; and adjusting, based on the pose information, a position of the sensor system such that the OGI camera is facing the first location. 7. The method of any of the preceding embodiments, wherein the inspection path information is associated with a first map obtained by the robot, the method further comprising: receiving, from the robot, navigation information indicating that the robot is lost, wherein the navigation information is generated based on a determination that an environment surrounding the robot does not match the first map; and in response to receiving the navigation information, sending a second map with an indication of each location of the plurality of locations to the robot.
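The inspection workflow recited in embodiments 1-7 above can be summarized as a control loop. The sketch below is a minimal illustration only, assuming hypothetical robot, sensor-system, and server objects that expose the named methods; the embodiments do not define a software interface.

```python
"""Minimal sketch of the inspection loop in embodiments 1-7 above.

All class and method names (move_to, adjust_camera, record_ogi_video, ...) are
hypothetical; the embodiments do not define a concrete software interface.
"""
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class InspectionLocation:
    position: Tuple[float, float]              # where to stop along the path
    orientation: Tuple[float, float, float]    # (roll, pitch, yaw) for the cameras


@dataclass
class InspectionPath:
    locations: List[InspectionLocation] = field(default_factory=list)
    low_battery: float = 0.2                   # assumed threshold for embodiment 3


def inspect(robot, sensor_system, server, path: InspectionPath) -> None:
    """Move along the path, orient the OGI camera, record, and upload."""
    for loc in path.locations:
        robot.move_to(loc.position)                        # causing the robot to move along the path
        if robot.current_position() != loc.position:       # location-sensor check
            continue
        robot.adjust_camera(loc.orientation)               # per-location orientation information
        if not sensor_system.orientation_matches(loc.orientation):
            continue                                       # wait for the orientation to match
        ogi_video = sensor_system.record_ogi_video()
        rgb_video = sensor_system.record_rgb_video()
        server.upload(ogi_video, rgb_video, loc)           # send both videos to the server
        # Embodiment 3: switch the sensor system to its own battery when the
        # robot's battery is low and locations remain to be inspected.
        if robot.battery_level() < path.low_battery:
            sensor_system.use_internal_battery()
```
8.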
The method of any of the preceding embodiments, further comprising: determining, based on an image received from the robot, the second map from a plurality of maps corresponding to the facility. 9. The method of any of the preceding embodiments, wherein the inspection path information is associated with a first map obtained by the robot, wherein the first map indicates one or more objects in the facility, the method further comprising: receiving, from the robot, an image of an environment surrounding the robot and navigation information indicating that the robot is lost, wherein the navigation information is generated based on a determination that an environment surrounding the robot does not match the first map; and in response to determining that the image does not correspond to any map of a plurality of maps associated with the facility, causing the robot to generate a new map of the facility. 10. The method of any of the preceding embodiments, wherein the first orientation information comprises a vector indicating a roll of the robot, a pitch of the robot, and a yaw of the robot. 11. The method of any of the preceding embodiments, wherein causing the robot to move along the path comprises: receiving information indicating a user that is logged into the robot; determining location permissions associated with the user; and causing, based on the location permissions associated with the user, the robot to skip a second location of the plurality of locations. 12. The method of any of the preceding embodiments, wherein the robot is configured to perform operations comprising: determining that more than a threshold amount of time has transpired since receiving a heartbeat message from the sensor system; and in response to determining that more than a threshold amount of time has transpired, moving to a charging station associated with the robot. 13. The method of any of the preceding embodiments, wherein the robot is configured to perform operations comprising: determining that more than a threshold amount of time has transpired since receiving a heartbeat message from the sensor system; and in response to determining that more than a threshold amount of time has transpired, sending a request to the server, wherein the request indicates a diagnostic procedure for the server to perform on the sensor system. 14. The method of any of the preceding embodiments, wherein the instructions for causing the robot to adjust the OGI camera based on first orientation information associated with the first location, when executed, cause the one or more processors to perform operations further comprising: determining that a first portion of an object at the first location has been recorded by the OGI camera; and in response to determining that a first portion of an object at the first location has been recorded, causing the robot to move so that a second portion of the object is within a field of view of the OGI camera. 15. The method of any of the preceding embodiments, wherein the server performs operations comprising: receiving the first video and the second video; determining that the first video indicates a gas leak; and in response to determining that the first video indicates a gas leak, causing the sensor system to record an additional video with the OGI camera at the first location. 16. 
The method of any of the preceding embodiments, wherein the server performs operations comprising: receiving, from the sensor system, the first video; determining, based on inputting the first video into a machine learning model, that there is no gas leak at the first location; and in response to determining that there is no gas leak at the first location, sending an indication that no gas leak was detected in the first plurality of images to the sensor system and a request indicating that the robot should move to a second location of the plurality of locations. 17. The method of any of the preceding embodiments, wherein the server performs operations comprising: obtaining the inspection path information; obtaining location information corresponding to a plurality of robots, wherein the plurality of robots comprises the robot; determining, based on the location information, that the robot is closer to the first location than other robots of the plurality of robots; and in response to determining that the robot is closer to the first location than other robots of the plurality of robots, sending the inspection path information to the robot. 18. The method of any of the preceding embodiments, wherein sending the inspection path information to the robot is performed in response to: receiving information indicating that a temperature of the sensor system is between a threshold range of temperatures. 19. The method of any of the preceding embodiments, wherein sending the inspection path information to the robot is performed in response to: receiving information indicating that a battery level of the robot is above a threshold battery level. 20. A tangible, non-transitory, machine-readable medium storing instructions that, when executed by a data processing apparatus, cause the data processing apparatus to perform operations comprising those of any of embodiments 1-19. 21. A system comprising: one or more processors; and memory storing instructions that, when executed by the processors, cause the processors to effectuate operations comprising those of any of embodiments 1-19. 22. A system comprising means for performing any of embodiments 1-19. Embodiments May Include 1. A method comprising: receiving inspection path information indicating a path for the robot to travel, and a plurality of locations along the path to inspect with the sensor system; determining, based on information received via the location sensor, that a distance between a location of the robot and a first location of the plurality of locations is greater than a threshold distance; in response to determining that the distance is greater than the threshold distance, sending a first command to the sensor system, wherein the first command causes the compressor of the OGI camera to turn off; moving along the path; in response to determining that the robot is at a first location of the plurality of locations, sending a second command to the sensor system, wherein the second command causes the compressor of the OGI camera to turn on; causing the sensor system to record a first video with the OGI camera and a second video with the RGB camera; and causing the sensor system to send the first video and the second video to the server. 2. The method of any of the preceding embodiments, further comprising: determining that the robot is charging a power source of the robot; and in response to determining that the robot is charging the power source, causing the sensor system to turn off the compressor. 3. 
The method of any of the preceding embodiments, further comprising: receiving, from the sensor system, an indication that a temperature within the sensor system is above a threshold temperature; and in response to receiving the indication that a temperature within the sensor system is above a threshold temperature, causing the compressor to turn on. 4. The method of any of the preceding embodiments, further comprising: in response to receiving the indication that a temperature within the sensor system is above a threshold temperature, recording a picture of an environment surrounding the robot; determining, based on inputting a portion of the picture into a machine learning model, a first location within the environment that receives less sunlight than a second location within the environment; moving to the first location; and causing the compressor to run until a temperature of the sensor system is below the threshold temperature. 5. The method of any of the preceding embodiments, wherein a first field of view of the RGB camera encompasses a second field of view of the OGI camera. 6. The method of any of the preceding embodiments, wherein the compressor is configured to cool the sensor to below 78 degrees Kelvin. 7. The method of any of the preceding embodiments, wherein the sensor comprises an infrared detector and a spectral filter, wherein the spectral filter prevents light outside a range of 3-5 micrometers from reaching the infrared detector. 8. The method of any of the preceding embodiments, wherein the cooling system comprises a second compressor, a refrigerant, and a condenser. 9. The method of any of the preceding embodiments, wherein the cooling system comprises a thermoelectric cooler that cools the sensor system by causing direct current to flow through a semiconductor of the cooling system. 10. The method of any of the preceding embodiments, wherein recording a first video with the OGI camera and a second video with the RGB camera comprises: adjusting, based on orientation information associated with the first location, an orientation of the robot, wherein the orientation information comprises an indication of a pitch, a yaw, and a roll of the robot; and in response to adjusting the orientation, recording the first video and the second video. 11. The method of any of the preceding embodiments, further comprising: receiving information indicating a first battery level of the sensor system; determining, based on the information indicating a first battery level of the sensor system, that the first battery level of the sensor system is above a first threshold; determining that a second battery level of the robot is below a second threshold; and based on determining that the first battery level is above the first threshold and the second battery level is below the second threshold, changing a source of power of the robot from a battery of the robot to the battery of the sensor system. 12. The method of any of the preceding embodiments, further comprising: generating, via a camera of the robot, an image of an environment adjacent to the robot; determining, based on a comparison of the image with a first map, that the first map does not match the environment; and in response to determining that the first map does not match the environment, sending, to the server, the image and a request for an updated map that corresponds to the image.
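The compressor- and power-management behavior in the preceding embodiments (running the cryocooler compressor only near an inspection point or when the enclosure is hot, seeking a shaded spot, and drawing robot power from the sensor system's battery) can be sketched as below. All method names and threshold values are illustrative assumptions, not part of the embodiments.

```python
"""Sketch of the compressor and power management in the preceding embodiments.

The distance, temperature, and battery thresholds, and all method names, are
illustrative assumptions; none are specified by the embodiments.
"""


def manage_compressor(robot, sensor_system, next_location,
                      standoff_m: float = 10.0, max_temp_c: float = 45.0) -> None:
    """Run the OGI camera's cryocooler only when it is likely to be needed."""
    if robot.distance_to(next_location) > standoff_m:
        sensor_system.compressor_off()          # save power while transiting
    else:
        sensor_system.compressor_on()           # pre-cool the detector before recording
    if sensor_system.temperature_c() > max_temp_c:
        sensor_system.compressor_on()           # embodiment 3: over-temperature
        shaded = robot.find_shaded_location()   # embodiment 4: e.g., image + ML model
        robot.move_to(shaded)


def manage_power(robot, sensor_system,
                 sensor_min: float = 0.5, robot_min: float = 0.2) -> None:
    """Embodiment 11: power the robot from the sensor system's battery when appropriate."""
    if (sensor_system.battery_level() > sensor_min
            and robot.battery_level() < robot_min):
        robot.set_power_source(sensor_system.battery)
```
13.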
The method of any of the preceding embodiments, further comprising: determining that more than a threshold amount of time has transpired since receiving a heartbeat message from the sensor system; and in response to determining that more than a threshold amount of time has transpired, moving to a charging station associated with the robot. 14. The method of any of the preceding embodiments, further comprising: determining that more than a threshold amount of time has transpired since receiving a heartbeat message from the sensor system; and in response to determining that more than a threshold amount of time has transpired, causing the sensor system to reboot. 15. The method of any of the preceding embodiments, further comprising: in response to determining that more than a threshold amount of time has transpired, causing a battery of the robot to charge a battery of the sensor system. 16. The method of any of the preceding embodiments, wherein causing the sensor system to record a first video with the OGI camera and a second video with the RGB camera comprises: causing the sensor system to record the first video with the OGI camera; receiving, from the sensor system, an indication that the robot is too close to the first location; in response to receiving the indication that the robot is too close to the first location, moving a threshold distance away from the first location; and causing the sensor system to record the second video with the RGB camera. 17. The method of any of the preceding embodiments, wherein the server performs operations comprising: obtaining the inspection path information; obtaining location information corresponding to a plurality of robots, wherein the plurality of robots comprises the robot; determining, based on the location information, that the robot is closer to the first location than other robots of the plurality of robots; and in response to determining that the robot is closer to the first location than other robots of the plurality of robots, sending the inspection path information to the robot. 18. The method of any of the preceding embodiments, wherein sending the inspection path information to the robot is performed in response to: receiving information indicating that a temperature of the sensor system is between a threshold range of temperatures. 19. The method of any of the preceding embodiments, wherein sending the inspection path information to the robot is performed in response to: receiving information indicating that a battery level of the robot is above a threshold battery level. 20. A tangible, non-transitory, machine-readable medium storing instructions that, when executed by a data processing apparatus, cause the data processing apparatus to perform operations comprising those of any of embodiments 1-21. 21. A system comprising: one or more processors; and memory storing instructions that, when executed by the processors, cause the processors to effectuate operations comprising those of any of embodiments 1-21. 22. A system comprising means for performing any of embodiments 1-21. Embodiments May Include 1. 
A method comprising: obtaining inspection path information indicating a path for the robot to travel, and a plurality of locations along the path to inspect with the sensor system, wherein each location of the plurality of locations is associated with orientation information indicating an orientation that the RGB camera and the OGI camera should be placed in to record a video; causing the robot to move along the path; determining, based on information received via the location sensor, that the robot is at a first location of the plurality of locations; in response to determining that the robot is at the first location, causing the robot to adjust the OGI camera based on first orientation information associated with the first location; recording a first plurality of images with the OGI camera; determining, based on the first plurality of images, that a gas leak exists at the first location; and sending, to the server, an indication that a gas leak exists at the first location. 2. The method of any of the preceding embodiments, wherein the instructions, when executed, cause the one or more processors to perform operations further comprising: in response to determining that a gas leak exists at the first location, recording, via the RGB camera, an image of the first location; and sending the image to the server. 3. The method of any of the preceding embodiments, wherein determining that a gas leak exists at the first location comprises: determining a reference image associated with the first location; generating a subtracted image by subtracting a first image of the first plurality of images from the reference image; and determining, based on the subtracted image, that a gas leak exists at the first location. 4. The method of any of the preceding embodiments, wherein determining a reference image comprises: obtaining, from a database, a historical image of the first location, wherein the historical image depicts an absence of a gas leak at the first location on a date that the historical image was taken. 5. The method of any of the preceding embodiments, wherein determining a reference image comprises: selecting, from the first plurality of images, an image that was recorded before any other image of the first plurality of images. 6. The method of any of the preceding embodiments, wherein determining that a gas leak exists at the first location comprises: determining a reference image, wherein the reference image depicts an absence of a gas leak at the first location on a date that the reference image was taken; generating, based on inputting a first image of the first plurality of images and the reference image into a machine learning model, a similarity score; comparing the similarity score with a threshold similarity score; and based on comparing the similarity score with a threshold similarity score, determining that a gas leak exists at the first location. 7. 
The method of any of the preceding embodiments, wherein determining that a gas leak exists at the first location comprises: obtaining a second plurality of historical images associated with the first location; generating, based on inputting the first plurality of images into a machine learning model, a first vector representation of the first plurality of images; generating, based on inputting the second plurality of historical images into a machine learning model, a second vector representation of the second plurality of historical images; and determining, based on a comparison of the first vector with the second vector, that a gas leak exists at the first location. 8. The method of any of the preceding embodiments, wherein the instructions, when executed, cause the one or more processors to perform operations further comprising: recording, via the OGI camera, a set of images, wherein the set of images comprises an image for each location of the plurality of locations; generating a label for each image in the set of images, wherein each label indicates a location associated with a corresponding image and indicates whether a gas leak was detected in the corresponding image; and sending the set of images to the server for use in training a machine learning model. 9. The method of any of the preceding embodiments, wherein the instructions for determining that a gas leak exists at the first location, when executed, cause the one or more processors to perform operations comprising: inputting the first plurality of images into a machine learning model; and in response to inputting the first plurality of images into the machine learning model, classifying one or more objects in the first plurality of images as a plume of gas. 10. The method of any of the preceding embodiments, further comprising: determining, based on output from the machine learning model and based on the first plurality of images, a target location within the plume of gas; sensing, via a laser sensor of the sensor system, a concentration level of gas at the target location; and based on determining that the concentration level of gas at the target location exceeds a threshold concentration level, sending an alert to the server. 11. The method of any of the preceding embodiments, wherein the server is configured to perform operations comprising: receiving, from the sensor system, a first set of images, wherein the first set of images is associated with one or more locations of the first plurality of locations; generating a webpage comprising a user interface, wherein the user interface is configured to receive input on one or more portions of an image; receiving, via the webpage, input corresponding to one or more images of the first set of images, wherein the input indicates a gas leak exists in the one or more images; and generating, based on the input and the first set of images, a training data set for a machine learning model. 12. The method of any of the preceding embodiments, wherein the first orientation information comprises a vector indicating a roll of the robot, a pitch of the robot, and a yaw of the robot.
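Two of the comparison strategies recited above, subtracting a leak-free reference image from a newly recorded OGI frame (embodiments 3-6 of this set) and comparing vector representations of current and historical image sets (embodiment 7), can be illustrated as follows. The NumPy implementation and the threshold values are assumptions made for the sketch; the embodiments leave the model and thresholds open.

```python
"""Sketch of two leak-detection comparisons described above: reference-image
subtraction and comparison of vector representations of current and historical
image sets. NumPy and the thresholds are assumptions for the sketch only.
"""
import numpy as np


def leak_from_subtraction(frame: np.ndarray, reference: np.ndarray,
                          pixel_threshold: float = 0.1,
                          area_fraction: float = 0.01) -> bool:
    """Flag a leak when enough pixels differ from a leak-free reference image."""
    diff = np.abs(frame.astype(float) - reference.astype(float))
    changed = (diff > pixel_threshold * reference.max()).mean()
    return bool(changed > area_fraction)


def leak_from_embeddings(current_vec: np.ndarray, historical_vec: np.ndarray,
                         min_similarity: float = 0.9) -> bool:
    """Flag a leak when the current embedding drifts away from the historical one."""
    cos = float(np.dot(current_vec, historical_vec) /
                (np.linalg.norm(current_vec) * np.linalg.norm(historical_vec)))
    return cos < min_similarity


if __name__ == "__main__":
    reference = np.full((64, 64), 0.2)       # leak-free reference frame
    frame = reference.copy()
    frame[10:20, 10:20] = 1.0                # a bright, plume-like region
    print(leak_from_subtraction(frame, reference))        # True
    print(leak_from_embeddings(np.ones(8), np.ones(8)))   # False: no drift
```
13.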
The method of any of the preceding embodiments, wherein causing the robot to move along the path comprises: receiving information indicating a user that is logged into the robot; determining location permissions associated with the user; and causing, based on the location permissions associated with the user, the robot to skip a second location of the plurality of locations. 14. The method of any of the preceding embodiments, wherein the server is configured to perform operations comprising: receiving, from the sensor system, the first plurality of images; determining, based on inputting the first plurality of images into a machine learning model, that there is no gas leak at the first location; and in response to determining that there is no gas leak at the first location, sending an indication that no gas leak was detected in the first plurality of images and a request to record a second plurality of images of the first location. 15. The method of any of the preceding embodiments, wherein the instructions, when executed, cause the one or more processors to perform operations further comprising: in response to receiving the request, recording a second plurality of images of the first location; determining, based on the second plurality of images, that there is no gas leak at the first location; and in response to determining that there is no gas leak at the first location, causing the robot to move to a second location of the plurality of locations. 16. The method of any of the preceding embodiments, further comprising: in response to receiving the request, causing the robot to move closer to the first location; recording a second plurality of images of the first location; and sending the second plurality of images to the server. 17. The method of any of the preceding embodiments, wherein the server performs operations comprising: obtaining the inspection path information; obtaining location information corresponding to a plurality of robots, wherein the plurality of robots comprises the robot; determining, based on the location information, that the robot is closer to the first location than other robots of the plurality of robots; and in response to determining that the robot is closer to the first location than other robots of the plurality of robots, sending the inspection path information to the robot. 18. The method of any of the preceding embodiments, wherein the robot is configured to perform operations comprising: determining that more than a threshold amount of time has transpired since receiving a heartbeat message from the sensor system; and in response to determining that more than a threshold amount of time has transpired, sending a request to the server, wherein the request indicates a diagnostic procedure for the server to perform on the sensor system. 19. The method of any of the preceding embodiments, wherein the robot is configured to perform operations comprising: determining that more than a threshold amount of time has transpired since receiving a heartbeat message from the sensor system; and in response to determining that more than a threshold amount of time has transpired, moving to a charging station associated with the robot. 20. A tangible, non-transitory, machine-readable medium storing instructions that, when executed by a data processing apparatus, cause the data processing apparatus to perform operations comprising those of any of embodiments 1-19.
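Embodiments 18 and 19 above describe reacting to a heartbeat timeout between the robot and the sensor system. A minimal watchdog sketch, assuming a hypothetical robot and server interface and an arbitrary 30-second timeout, could look like this.

```python
"""Minimal watchdog for the heartbeat timeout in embodiments 18 and 19 above.

The 30-second timeout, the monotonic-clock approach, and the robot/server
method names are assumptions made for the sketch.
"""
import time


class HeartbeatWatchdog:
    def __init__(self, robot, server, timeout_s: float = 30.0):
        self.robot = robot
        self.server = server
        self.timeout_s = timeout_s
        self.last_heartbeat = time.monotonic()

    def on_heartbeat(self) -> None:
        """Call whenever a heartbeat message arrives from the sensor system."""
        self.last_heartbeat = time.monotonic()

    def check(self) -> None:
        """Call periodically; react if the sensor system has gone silent."""
        if time.monotonic() - self.last_heartbeat > self.timeout_s:
            # Embodiment 18: ask the server to run a diagnostic on the sensor system.
            self.server.request_diagnostic(target="sensor_system")
            # Embodiment 19: return to the charging station.
            self.robot.move_to_charging_station()
```
21.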
A system comprising: one or more processors; and memory storing instructions that, when executed by the processors, cause the processors to effectuate operations comprising those of any of embodiments 1-19. 22. A system comprising means for performing any of embodiments 1-19. The above-described embodiments of the present disclosure are presented for purposes of illustration and not of limitation, and the present disclosure is limited only by the claims which follow. Furthermore, it should be noted that the features and limitations described in any one embodiment may be applied to any other embodiment herein, and flowcharts or examples relating to one embodiment may be combined with any other embodiment in a suitable manner, done in different orders, or done in parallel. In addition, the systems and methods described herein may be performed in real time. It should also be noted that the systems and/or methods described above may be applied to, or used in accordance with, other systems and/or methods. | 113,774 |
11858123 | DETAILED DESCRIPTION A machine tool hand1according to an embodiment of the present disclosure will be described below with reference to the drawings. As shown inFIGS.1and2, the machine tool hand1according to this embodiment includes a body portion2and a pair of hand members3. The body portion2includes: a distal end portion4that is formed in a cuboid block shape; and a cylindrical shaft portion5that is disposed closer to the base end than the distal end portion4is, and that is for mounting the body portion2so as to be attachable to and detachable from a spindle100of a machine tool. As shown inFIG.4, the shaft portion5is provided with a flow path7that extends along a central axis from an opening6at one end. When the shaft portion5is attached to the spindle100, the opening6is connected to a coolant-liquid supply path110provided in the spindle100, and a coolant liquid supplied from the coolant-liquid supply path110is allowed to flow into the flow path7. The pair of hand members3are disposed on both sides of the distal end portion4of the body portion2, and are individually attached to the body portion2so as to be pivotable about two parallel pivot axes (axes) A, B extending along a plane orthogonal to the central axis of the shaft portion5. As shown inFIG.4, the flow path7provided in the body portion2is branched into branched flow paths8in two directions in the distal end portion4. The branched flow paths8are individually branched in the directions toward the individual hand members3. The branched flow paths8have discharge ports9opening at two locations on each side surface of the distal end portion4. The total of the opening areas of the four branched flow paths8and the discharge ports9thereof is set to be substantially equal to the cross-sectional area of the flow path7in the shaft portion5. Gears10that rotate about the pivot axes A, B are fixed to the respective hand members3. The gears10fixed to the two hand members3mesh with each other. In addition, a coil spring (urging member)11, which urges the hand members3in the closing direction by means of an elastic restoring force, is bridged between the two hand members3. In this embodiment, opposing side surfaces of the pair of hand members3are disposed so as to face the positions where these opposing side surfaces close off the discharge ports9provided in the distal end portion4of the body portion2. In a state shown inFIG.1in which the pair of hand members3are closed, the side surfaces of the individual hand members3are disposed closest to the discharge ports9, at positions where the side surfaces substantially close off the individual discharge ports9. In a state shown inFIG.2in which the pair of hand members3are opened, the side surfaces of the individual hand members3are slightly inclined so as to be separated from the discharge ports9as a result of the individual hand members3pivoting about the pivot axes A, B, and gaps are formed between the body portion2and the hand members3. The individual hand members3are constantly urged in the closing direction by means of the elastic restoring force of the coil spring11. In other words, in a state in which a workpiece (object) W is not held between the hand members3and the coolant liquid is not supplied into the flow path7, the side surfaces of the individual hand members3close the discharge ports9in the body portion2by means of the elastic restoring force of the coil spring11. 
Then, when the coolant liquid is supplied into the flow path7, the individual hand members3are pushed by the coolant liquid discharged from the discharge ports9and are caused to pivot in directions in which the hand members3are separated from each other against the elastic restoring force of the coil spring11. The pressure of the coolant liquid discharged from the discharge ports9is, for example, 0.4 MPa to 2 MPa. In the figures, reference sign12indicates an end stopper that abuts against the hand members3at the open positions and that restricts further movement of the hand members3by means of elastic deformation. In addition, reference sign13indicates an elastic member, such as a sponge, that comes into contact with the workpiece W and is elastically deformed when the workpiece W is gripped. The operation of the thus-configured machine tool hand1according to this embodiment will be described below. In the description hereinafter, since the machine tool hand1according to this embodiment is laterally symmetric, only one side will be described, and the description for the other side will be omitted. The machine tool hand1according to this embodiment is, for example, stored as one of the tools in a tool magazine of a machine tool and is attached to the spindle100in place of another tool, as needed or periodically, by an automatic tool changing device (not shown) provided in the machine tool. For example, when it is necessary to supply a workpiece W to a chuck provided on a rotary table, to change the orientation of a workpiece W, or the like, the machine tool hand1is attached to the spindle100. Then, in a state in which the machine tool hand1is attached to the spindle100, with the hand members3thereof directed downward, the machine tool hand1is raised or lowered in accordance with an elevating operation of the spindle100. As shown inFIG.1, in the case of gripping a workpiece W disposed vertically below the machine tool hand1, the machine tool hand1is lowered by lowering the spindle100, and as shown inFIG.2, the pair of hand members3are brought into an open state by supplying the coolant liquid from the spindle100. Then, in a state in which the workpiece W is disposed between the pair of hand members3, as shown inFIG.3, the pair of hand members3are brought into a closed state by stopping the supply of the coolant liquid, thus allowing the workpiece W to be gripped. In this case, with the machine tool hand1according to this embodiment, when the pair of hand members3are brought into the open state, the coolant-liquid supply path110that is utilized during processing of the workpiece W is used, and the coolant liquid is supplied into the flow path7in the body portion2from the coolant-liquid supply path110through the opening6. By doing so, the coolant liquid supplied into the flow path7in the body portion2is discharged from the discharge ports9opening in two directions and presses the side surfaces of the individual hand members3, thereby allowing the individual hand members3to pivot about the pivot axes A, B. Because the flow path7in the body portion2is open to the outside at the discharge ports9, the coolant liquid discharged from the discharge ports9flows along the side surfaces of the hand members3while pressing the side surfaces of the hand members3, and flows downward from the distal ends of the hand members3. The coolant liquid that has flowed down is recovered in a coolant tank (not shown) and is allowed to circulate in the same manner as in the processing.
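For a rough sense of scale, the coolant pressure stated above (0.4 MPa to 2 MPa) can be combined with a purely hypothetical discharge-port diameter of 3 mm (the port size is not specified in the description) to estimate the static force available at a single discharge port; the actual opening moment further depends on the jet behavior and on the moment arm about the pivot axes A, B, which are not estimated here.

```latex
% Hypothetical port diameter d = 3 mm (not specified in the description).
\[
A = \frac{\pi d^{2}}{4} \approx 7.1\,\mathrm{mm^{2}}, \qquad
F = p\,A \approx 2.8\,\mathrm{N}\ \text{at}\ 0.4\,\mathrm{MPa}, \qquad
F \approx 14\,\mathrm{N}\ \text{at}\ 2\,\mathrm{MPa}.
\]
```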
Although fine chips or the like become mixed into the coolant liquid at this time, in this embodiment, the coolant liquid for opening and closing the hand members3is released to the outside from the discharge ports9of the flow path7in the body portion2, and the hand members3are caused to pivot by the flow energy during the release. Therefore, because the coolant liquid is not supplied into a closed space, there is an advantage in that it is possible to perform handling, etc. of a workpiece W without causing a malfunction due to chip clogging even when fine chips or the like become mixed into the coolant liquid. In addition, with the machine tool hand1according to this embodiment, the pair of hand members3are constantly urged in the closing direction by means of the elastic restoring force of the coil spring11; thus, the hand members3can be easily brought into the closed state by stopping the supply of the coolant liquid. In addition, because the gears10meshing with each other are fixed to the pair of hand members3, there is an advantage in that it is possible to place the gripping position of the workpiece W at the center as a result of the pair of hand members3pivoting by the same angles distributed from the center. In addition, when the pair of hand members3are brought into the open state, as shown inFIGS.4and5, the hand members3pivot about the pivot axes A, B, and the side surfaces pushed by the coolant liquid are each disposed so as to form an angle larger than 90° relative to the discharge direction. With this configuration, a guide means for the coolant liquid is formed, and the coolant liquid is allowed to flow along the side surfaces while being guided toward the distal ends of the hand members3. As a result, there is an advantage in that portions for gripping the workpiece W can be washed with the coolant liquid every time the pair of hand members3are opened. In addition, in this embodiment, the flow path7in the body portion2is branched into the four branched flow paths8, and the cross-sectional area of each of the branched flow paths8and the discharge ports9is set to be ¼ of the cross-sectional area of the flow path7. Thus, the cross-sectional area of flow does not change between the flow path7and the branched flow paths8, and loss can be reduced. Note that, in this embodiment, the side surfaces of the hand members3are pushed by the coolant liquid itself discharged from the discharge ports9. Alternatively, as shown inFIGS.6and7, spherical metal (e.g., steel) bodies (movable bodies)15movable in the branched flow paths8in the vicinity of the discharge ports9may be disposed. As shown inFIG.6, the spherical body15has a diameter smaller than an internal diameter of the branched flow path8; and a gap through which the coolant liquid flows out is formed between the spherical body15and an inner surface of the branched flow path8. In the figures, reference sign16indicates a cylindrical body that forms an inner wall of the branched flow path8in a movable range of the spherical body15. Reference sign17indicates an abutting member embedded in the side surface of each of the hand members3opposing the discharge ports9. In order to suppress abrasion of the body portion2and the hand members3caused by contact with the spherical bodies15when the body portion2and the hand members3are made lighter in weight by using an aluminum alloy or the like, the cylindrical body16and the abutting member17are constituted by a hard material, for example, steel.
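The area relationship stated above, with four branched flow paths8each set to one quarter of the cross-sectional area of the flow path7, conserves the total flow area; assuming circular bores (an assumption, since the cross-sectional shape is not stated), each branch diameter is then half the main diameter.

```latex
% Four branches, each with one quarter of the main cross-sectional area,
% preserve the total flow area; for circular bores the diameters halve.
\[
A_{8} = \tfrac{1}{4}\,A_{7} \;\Rightarrow\; \sum_{i=1}^{4} A_{8,i} = A_{7},
\qquad d_{8} = \tfrac{1}{2}\,d_{7}.
\]
```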
The abutting members17are secured to the hand members3in a replaceable manner by engagement and disengagement of screws. Reference sign12indicates an end stopper that restricts a pivot angle about pivot axis A or B of the hand members3. The end stopper12according to this embodiment stops pivoting of the hand members3by abutting the hand members3at a pivot angle where the amount of the spherical body15protruding from the discharge port9is equal to or less than a hemisphere. InFIGS.6and7, illustrations of the coil spring11are omitted. When the coolant liquid is supplied into the flow path7from the coolant-liquid supply path110, a portion of the coolant liquid branched to the two branched flow paths8pushes the spherical body15to move the spherical body15to a side of the discharge port9, and the remainder is discharged to the outside from the discharge port9via the gap between the spherical body15and the inner surface of the branched flow path. Each of the spherical bodies15pushed by means of a pressure of the coolant liquid pushes the corresponding abutting member17of the corresponding hand member3by protruding from the corresponding discharge port9to the outside, thus causing the hand members3to pivot about the respective pivot axes A, B. By doing so, distal ends of the two hand members3are opened. Since the center of the spherical body15is held further to the inner side than the discharge port9, a minimum value of the coolant-liquid communication cross-sectional area formed by the gap between the inner surface of the branched flow path8and the spherical body15remains constant without depending on the pivot angle of the corresponding hand member3. That is, since the pressure applied to the spherical body15from the coolant liquid does not vary with the pivot angle of the corresponding hand member3, the hand members3can be opened with a large stable force only by increasing the pressure of the coolant liquid. As a result, there are advantages in that a spring rigidity can be improved, a gripping force for the workpiece W can be improved, and therefore a stable handling can be realized. In addition, by providing the gap around the spherical body15, the coolant liquid can be discharged to the outside from the corresponding discharge port9without confining the coolant liquid in the flow path7. By doing so, it is possible to prevent malfunction caused by clogging with chips or the like contained in the coolant liquid. In addition, the amount of the coolant liquid discharged from the discharge port9, out of the coolant liquid supplied into the flow path7, can be restricted by means of the corresponding spherical body15. By doing so, there are advantages in that discharge of a large amount of the coolant liquid when the hand members3are opened can be prevented, and an operator can easily confirm the state of the hand members3and the workpiece W visually. In the above aspect, a spherical shape was illustrated as the shape of the movable body; however, the shape is not limited thereto. An arbitrary shape, such as a columnar shape, a shell-type shape, or the like, may be employed. Even when the columnar shape or the shell-type shape is employed, a posture change of the movable body can be suppressed, and the minimum value of the communication cross-sectional area formed by the gap between the inner surface of the branched flow path8and the movable body15can remain substantially constant. Note that the gears10for synchronizing pivoting of the two hand members3need not be provided in this embodiment.
In addition, although a case in which the two hand members3are both caused to pivot about the pivot axes A, B has been illustrated as an example, one of the hand members3may be fixed and only the other one may be caused to pivot. This configuration also allows gripping of the workpiece W. In addition, although the machine tool hand1including the pair of hand members3has been illustrated as an example in this embodiment, alternatively, the present invention may be applied to a machine tool hand1including three or more hand members3. In addition, although four branched flow paths8are provided in this embodiment, alternatively, it suffices that two or more branched flow paths8be provided for causing two or more hand members3to pivot. In addition, the hand members3may have any shape, and a plurality of machine tool hands1having different shapes may be prepared according to the shapes of workpieces W to be gripped. In addition, although the coolant liquid is supplied in the case of opening the hand members3in this embodiment, alternatively, as shown inFIGS.8to10, the coolant liquid may be supplied when the hand members3are to be closed. In this case, a coil spring (urging member)14that constantly urges the hand members3in the opening directions may be employed. Next, a machine tool hand20according to another embodiment of this disclosure will be described below. In the description of this embodiment, identical reference signs are assigned to portions having configurations common to those in the machine tool hand1according to the above embodiment, and a description thereof will be omitted. As shown inFIG.11, the machine tool hand20according to this embodiment includes a body portion21, a movable member22, and a pair of hand members23. A distal end portion24of the body portion21is provided with a groove25that is arranged at the center of the width direction and that extends along the central axis of the shaft portion5from an end portion opposite to the shaft portion5. The flow path7, which extends along the central axis of the shaft portion5from the opening6at one end of the shaft portion5, is provided with the discharge port9at a groove bottom of the groove25. The movable member22is disposed inside the groove25and is supported so as to be movable only in a direction along the central axis of the shaft portion5with respect to the distal end portion24. A coil spring (urging member) which is not shown is bridged between the movable member22and the distal end portion24. The movable member22is constantly urged in a direction closer to the discharge port9by means of the elastic restoring force of the coil spring. The movable member22is moved closer to the groove bottom, thus being located at a position where the discharge port9is closed. The pair of hand members23are supported to the movable member22in such a way that each proximal end thereof can pivot about a pivot axis (axis) C perpendicular to the central axis of the shaft portion5. At an intermediate position of each of the hand members23, an elongated guide hole26is provided so as to obliquely extend toward a direction of the distal end of the corresponding hand member23and a direction away from the central axis of the shaft portion5. A pin27that is provided on a side surface of the distal end portion24and that extends parallel to the pivot axis C, is inserted into each of the guide holes26. The operation of the thus-configured machine tool hand20according to this embodiment will be described below. 
When the coolant liquid is supplied into the flow path7, the coolant liquid flows and is discharged toward the movable member22from the discharge port9. The discharged coolant liquid is sprayed into the movable member22that is being disposed at the position where the discharge port9is closed. By doing so, the movable member22is moved toward a direction away from the discharge port9along the central axis of the shaft portion5. In response to the movement of the movable member22, each of the hand members23connected to the movable member22is moved along a longitudinal direction of the corresponding guide hole26with respect to the corresponding pin27secured at the distal end portion24. By doing so, as shown inFIG.12, the hand members23pivot about the pivot axis C, and thus the distal ends of the hand members23are opened. When the supply of the coolant liquid is stopped, the movable member22is pulled back toward the direction closer to the discharge port9by means of the elastic restoring force of the coil spring. By doing so, in response to the movement of the movable member22, each of the hand members23pivots in a direction in which the distal end is closed. Thereby, it is possible to grip the workpiece W between the hand members23. Thus, similarly to the machine tool hand1according to the first embodiment, the machine tool hand20according to this embodiment can also open and close the pair of hand members23by means of the coolant liquid discharged from the discharge port9. There is an advantage in that, even when fine chips or the like become mixed into the coolant liquid, by releasing the coolant liquid to the outside from the discharge port9, it is possible to perform handling, etc. of the workpiece W without causing a malfunction due to chip clogging. In addition, it is possible to cleanse the hand members23with the coolant liquid discharged from the discharge port9flowing along the surface of the hand members23every time the pair of hand members23are opened. Note that, in this embodiment, the case where the hand members23are opened by discharging the coolant liquid was illustrated. Reversely, the hand members23may be closed by discharging the coolant liquid. | 19,758 |
11858124 | DETAILED DESCRIPTION FIG.1shows a quick-change system1in a three-dimensional view. The quick-change system1here includes two receiving parts2and two exchange tools3, each receiving part2having a receiving opening4for receiving the respective exchange tool3. The exchange tools3are designed as gripper jaws. For each receiving part2, there is also one ball catch5for temporarily fixing the respective exchange tool3on the receiving part2. Each ball catch5includes a sleeve6closed on one side (seeFIG.2), in which a spring-mounted press ball7is arranged. The press balls7can each be manually engaged in a ball-receiving opening8on the exchange tool3and can also be disengaged from these again, for example to be able to quickly change to a differently dimensioned exchange tool. The exchange tool3is moved manually towards the receiving part2in the engagement direction E for engaging with the receiving part2. The exchange tool stops3a,3bprovided here on the exchange tool3are brought to bear above and below the receiving part2and the exchange tool3is pushed into the receiving opening4until the press ball7engages in the ball-receiving opening8. FIG.1shows, on the left side, the state before the exchange tool3engages with the receiving part2and, on the right side, the state of the exchange tool3after it has engaged with the receiving part2. The receiving parts2are arranged on a holder12. The receiving parts2are mounted on the holder12by means of screw connections13here. FIG.2shows a single ball catch5as an enlarged view in a sectional view. The sleeve6, which is closed on one side, can be seen, in which a press ball7, which is spring-mounted by means of a spring element11, is arranged. The sleeve6has an axis of symmetry S, the press ball7being displaceable along the axis of symmetry S. A helical compression spring is provided here as the spring element11, but other types of spring elements such as conical springs, disc springs, elements formed from an elastomer and the like can also be used. Above the ball catch5, a section of the exchange tool3can be seen, here in the area of a ball-receiving opening8in the form of a blind hole. The ball-receiving opening8has a central axis M which, when the receiving part2and the exchange tool3are in an engaged state, runs parallel to the axis of symmetry S, as shown here. The axis of symmetry S and the central axis M run parallel to one another at a distance A in the range from 1 to 3.5 mm in the engaged state. The selection of the distance A has a direct influence on the holding force between the exchange tool3and the receiving part2in the engaged state. At least in the contact area8a, in which an edge of the ball-receiving opening8would come into contact with the surface of the press ball7, the ball-receiving opening8has an edge break. The choice of the size of the edge break also has an influence on the holding force between the exchange tool3and the receiving part2in the engaged state. The final constructive design for the holding force required in the respective application can be determined on the basis of a few tests. FIG.3shows a receiving part2′ for a further quick-change system (cf.FIG.4) in a three-dimensional view.FIG.4shows a section through a part of such a further quick-change system (only one gripper jaw shown here, arrangement of two receiving parts2′ also provided here as inFIG.1on a holder12) with receiving part2′ according toFIG.3in a three-dimensional view. 
The receiving part2′ here has two ball catches5for the temporary fixation of an exchange tool3′ on the receiving part2′. Each ball catch5includes a sleeve6closed on one side (seeFIG.2), in which a spring-mounted press ball7is arranged. The press balls7can each be manually engaged with a ball-receiving opening8on the exchange tool3′ and can also be disengaged from these again, for example to be able to quickly change to a differently dimensioned exchange tool. The receiving part2′ also has two receiving openings4. The exchange tool3′ is moved manually towards the receiving part2′ in the engagement direction E in order to engage with the receiving part2′. The exchange tool stops3a′,3b′ provided here on the exchange tool3′ are brought to bear above and below the receiving part2′ in the area of the receiving openings4and the exchange tool3′ is pushed on until the press balls7of the ball catches5engage with the corresponding ball-receiving openings (not visible here) on exchange tool3′ and a stop9is reached. The exchange tool3′ here comprises a gripper jaw30which can be mounted separately via a further screw connection13′. To form a further quick-change system, two such receiving parts2′ are mounted on a holder12(seeFIG.1) by means of the screw connections13and each provided with an exchange tool3′ (each comprising a gripper jaw30). Only two exemplary embodiments for the formation of quick-change systems according to the disclosure are shown here, but further embodiments as described above are possible according to the present disclosure. REFERENCE NUMERALS 1Quick-change system2,2′ Receiving part3,3′ Exchange tool3a,3b,3a′,3b′ Exchange tool stop30Gripper jaw4Receiving opening5Ball catch6Sleeve7Press ball8Ball-receiving opening8aContact area9Stop11Spring element12Holder13,13′ Screw connectionA DistanceS Axis of symmetryM Central axisE Engagement direction | 5,353 |
11858125 | DESCRIPTION OF THE PREFERRED EMBODIMENTS FIG.1shows an embodiment of a rotary actuator1according to the disclosure. The actuator1can also be a linear actuator. The actuator1has an actuator body2. The actuator1includes a drive element configured to be connected to a tool. In a rotary actuator, the drive element is a rotary shaft5and the rotary actuator1is configured to rotate the rotary shaft5. The actuator1can thus rotate the tool via the rotary shaft5. In the embodiment ofFIG.1, an end6of a rotary shaft5protrudes from the actuator body2and the tool40or tool interface45can engage the protruding end of the rotary shaft5so as to operatively connect the tool to the actuator. The actuator1includes a drive10configured to rotate the rotary shaft5. The actuator body2can include a casing12and the drive10is disposed in the casing12. The actuator1is configured to be actuated via an input received at a connector14(FIG.2A). In an exemplary embodiment, the actuator1has a connector14for an air hose. The connector14enables a connection to a positive and/or negative pressure generator via an air hose. As shown inFIG.2A, the actuator1can include two connectors14for air hose connections. The drive of the actuator1can, for example, be actuated via positive or negative pressure supplied to the actuator1via the air hose connected to the connector14. In an example, the drive10is a pneumatically driven rack and pinion system which rotates the rotary shaft5in response to an input being received at the connector14. The drive10can also be actuated in an alternate manner, for example, electrically, whereby the connector14is configured to receive power and/or electrical signals for the actuator1. In the case of a rotary actuator1, the drive10can cause the rotary shaft5to assume any rotational position. In the embodiment shown inFIG.1, the actuator1includes two angle adjustors15. The first angle adjustor15can set a starting angle and the second angle adjustor15can set an end angle so that the angle adjustors15set a range of rotation for the rotary shaft5. In the case of a linear or translational actuator, the drive10can move the drive element between two end positions. The drive10can be configured to have the drive element assume any position between the two end positions. FIG.2Ashows the actuator ofFIG.1in a front section view. The drive10for rotating the rotary shaft5is arranged in the casing12. The rotary shaft5defines a through passage7. The through passage7has a first opening8configured to connect to a positive and/or negative pressure generator and a second opening9configured to connect to a tool attached to the actuator1. As a result of the through passage, negative and/or positive pressure can be supplied to the tool40via the through passage7. The tool or tool interface has a port for receiving negative and/or positive pressure from the rotary shaft5. The port of the tool or tool interface can, in particular, be formed integrally with a connector of the tool and/or tool interface for connecting to the actuator1, in particular the rotary shaft5of the actuator1. Thus, it is possible to supply the tool rotated by the actuator1with positive and/or negative pressure through the connection to the actuator1instead of through external lines which can, for example, become snagged or tangled, especially when rotating. The rotary shaft5and the through passage7can be connected to a positive and/or negative pressure source via the first opening8. 
FIG.2Bshows the actuator ofFIG.1in a side section view along line A-A through the rotary shaft5. FIG.3Ashows an actuator1attached to a gripper head20. The gripper head20is connected to a gripper head mount30. In the embodiment shown inFIG.3A, the gripper head20is mounted on the gripper head mount30via fasteners31. The gripper head mount30can further include a source of negative and/or positive pressure, for example, a vacuum generator or a compressor. The gripper head20has an internal channel29leading from a connection to the negative and/or positive pressure source in the gripper head mount30to a tool section22of the gripper head. The actuator1is mounted on the tool section22. A tool40is mounted on the actuator1and the actuator1is configured to rotate the tool40. In the embodiment shown inFIG.3A, the tool40is a suction cup tool. The suction cup tool includes a tool plate41and has suction cups48mounted therein. The suction cups48can be supplied with a vacuum via an internal tool plate channel42. When the tool plate41is mounted on the actuator1, the internal tool plate channel42connects to the through passage7of the rotary shaft5which, in turn, is connected to the internal channel29of the gripper head20and thus connected to the vacuum generator in the gripper head mount30. FIG.3Bshows the gripper head assembly50ofFIG.3Ain an exploded view. An output of the positive and/or negative pressure source32in the gripper head mount30connects the internal channel29of the gripper head20when assembled. In the embodiment shown inFIG.3B, the gripper head20includes a tool section22which defines a tool attachment region, here formed by two attachment projections24. The actuator1is fixed to each attachment projection24via fasteners25. The through passage7of the actuator1can be connected to the internal channel29of the gripper head20via a supply line70. The through passage7of the actuator1can also be connected to the internal channel in other ways, such as via a direct connection from an outlet of the internal channel29in the tool section22to the through passage7in the rotary shaft5of the actuator1. A rotary hub45is attached to the rotary shaft5and rotates therewith. The rotary hub45acts as a connector for the tool plate41. An exemplary embodiment of the gripper head ofFIG.3Bis shown inFIG.4A. The gripper head20includes a mounting section21for attaching the gripper head20to a gripper head mount30, such as a quick-change mount. The gripper head20can, for example, be attached to the gripper head mount30via threaded fasteners. The mounting section21defines a mounting plane. The gripper head20further has a tool section22. In the embodiment shown inFIG.4A, the tool section22includes two projections24for holding an actuator1. The two projections conjointly define a gap therebetween. FIG.4Bshows a rear, section view of the gripper head20ofFIG.4A. In the embodiment shown inFIG.4B, the gripper head20defines openings through which fasteners can be inserted for fixing the gripper head20to a gripper head mount30. A first port26is defined in the mounting section21. An internal channel29extends from the first port26at the mounting section21to a second port27at the tool section22, connecting the first port26to the second port27. A negative and/or positive generator in the gripper head mount30can be operatively connected to the tool section22via the internal channel29. The gripper head20can further include a pass-through passage35for, for example, a cable, tube or wire such as an air line or electrical line. 
The gripper head20ofFIG.4Bhas two pass-through passages35. The pass-through passages35have a first opening36at or near the mounting section21and a second opening37at or near the tool section22. In the embodiment ofFIG.4B, a pass-through passage35is arranged on each side of the internal channel29, which is centrally located in the gripper head20. According to an exemplary embodiment, the connector14of actuator1is connected to a pneumatic hose which is guided through the pass-through passage35which opens at a second opening37near the connector14when the actuator1is attached to the tool section22of the gripper head20. The actuator1may also have two connectors14wherein each is connected to a pneumatic hose guided through a pass-through channel35. In other embodiments, where the actuator1is electrically driven, the power and/or control cables can be guided through the pass-through channel(s) and connected to the connector(s)14. The guidance through the pass-through channel35eliminates or at least reduces the potential of a hose and/or cable for the actuator becoming tangled or snagged. FIG.4Cshows a cross-section of the gripper head body ofFIG.4Bat line B-B. In the embodiment shown inFIG.4C, the internal channel29has a rectangular cross-section and the pass-through passages35have a circular cross section. As further shown inFIG.3B, the internal channel29of the gripper head20can be connected to the first opening8of the through passage7of the rotary shaft5via a supply line70. The supply line70includes a vacuum fitting72, a vacuum tube71, and a pneumatic fitting73. The supply line70can also, for example, be a hose. The supply line70connects the second port27of the gripper head20to the first opening8of the through passage7of the rotary shaft5. As a result, negative and/or positive pressure can be conveyed from a positive/negative pressure source, for example disposed in the gripper head mount30, to the tool via the internal channel29of the gripper head and the through passage7of the rotary shaft5. The aforementioned configuration eliminates the need for an external hose connection between the tool and a positive and/or negative pressure source which can move relative to each other causing the hose to become snagged or tangled when the tool moves relative to the gripper head. Additionally, the elimination of the hose also prevents the hose from becoming snagged on an external object, a processing station to which it is delivering a work piece, or on the work piece itself. An embodiment of the tool plate41is shown inFIG.5. The tool plate ofFIG.5includes three receptacles44configured to receive and hold suction cups48. In particular, the receptacles44can receive a stem49of a suction cup48. The tool plate41can further define a hub pocket43into which the rotary hub45can be inserted for attaching the tool plate41to the actuator1. In such embodiments, a rotary hub45or the like can act as a tool interface for the actuator1. In the embodiment ofFIG.5, the tool plate41is fixed to the rotary hub45via fasteners inserted through fastening openings in the tool plate41and engaging the rotary hub45. As a result, the tool plate41rotates with the rotary hub45which in turn is rotated by the rotary shaft5of the actuator1. The implementation of a tool interface, for example in the form of a rotary hub45, can facilitate a quick and efficient switching of tools attached to the actuator1. 
FIG.6Ashows the tool plate41ofFIG.5in a top plan view.FIG.6Ashows tool plate channels42which connect the receptacles44to the positive and/or negative pressure source via the second opening9of the rotary shaft5. In an embodiment, the tool plate defines two tool plate channels42running parallel. The extra tool plate channel42provides a redundancy in case the first tool plate channel42becomes blocked. FIG.6Bis a section view of the tool plate41ofFIG.5. As shown inFIG.6B, the tool plate41includes a tool plate channel42configured to supply each of the suction cups48mounted in the receptacles44with negative and/or positive pressure. The tool plate41has a port for receiving negative and/or positive pressure. In alternate configurations, the tool plate41may be equipped with tools other than suction cups48and may receive negative and/or positive pressure via the port. In further embodiments, the tool may, for example, be electrically driven and the port serves as an input for receiving power and/or commands. FIG.7shows a rotary hub45. The rotary hub45is configured to be attached to the end of the rotary shaft5and to have the tool plate41attached thereto serving as an interface between the rotary shaft of the actuator1and the tool plate41. Using a rotary hub45as an interface between the rotary shaft5and the tool plate41can enable a quicker retooling of a gripper head, that is, the tool plate41can be removed and replaced with a different tool plate equipped with different tools or differently scaled tools. The rotary hub ofFIG.7includes a rotary hub passage46. The rotary hub passage46accommodates the tool end6of the rotary shaft5. When the rotary hub45is attached to the tool end6of the rotary shaft5, the rotary hub45rotates with the rotary shaft5. The rotary hub45further includes connectors47for connecting a tool plate41. The connectors47can, for example, be bores with an inner thread for receiving a threaded fastener. The tool plate41can also be attached to the rotary hub45via a snap-on connection or other connection, especially for a quick and efficient retooling. The rotary hub passage46can facilitate a direct connection between the tool end6of the rotary shaft5and tool plate channel42. The rotary hub passage46can also act as a connecting passage between the second opening9of the through passage7and the tool plate channel42. FIG.8shows a further embodiment, wherein a gripper head20has an actuator1attached thereto. The gripper head20and actuator can be configured as described with respect toFIG.3A. As in the embodiment shown inFIG.3A, the actuator1shown is equipped with a tool plate41furnished with suction cups48. The gripper head20further has a second tool80attached thereto. In the shown embodiment, the second tool80is a pinch gripper. The gripper head20can include the internal channel29so that the first tool40or tool plate41can be supplied with negative and/or positive pressure via the internal channel29of the gripper head20and the through passage7of the rotary shaft5. The first and second tools can also be swapped. FIG.9shows the embodiment ofFIG.8in an exploded view.FIG.10shows an embodiment of a pinch gripper81having gripper jaws83. The pinch gripper81further has a pinch gripper input82. The pinch gripper input82can connect the pinch gripper81to a source of negative and/or positive pressure. Alternatively, the pinch gripper input82can be configured to receive an electrical signal and power for controlling and operating the pinch gripper81. 
FIG.11Ashows an embodiment of a tool plate60which is configured to have two tools attached thereto. The main body of the tool plate60ofFIG.11Ais configured similar to the tool plate41ofFIG.5. The tool plate60further includes a tool attachment section61for a second tool80, here a pinch gripper81. The tool attachment section61defines openings through which fasteners can be inserted so as to engage the pinch gripper81and hold the same on the tool plate60. The second tool80can, however, also be attached to the tool plate60via any suitable attachment. In other embodiments, a second tool80other than a pinch gripper81can be attached to the tool attachment section61of the tool plate60.FIG.11Bshows the tool plate60including the tool plate channel62connecting the receptacles64to the through passage7of the actuator1.FIG.11Cshows a section view of the tool plate60along line A-A ofFIG.11B. FIG.12shows an embodiment of a gripper head20having an actuator1attached to the tool section22.FIG.12illustrates an additional axis provided by the actuator1. For example, a first six axes can be provided by a robotic arm. An additional axis, in the described example, a seventh axis, is provided by the actuator1mounted on the gripper head20. Further, as a result of the combination of the internal channel29in the gripper head20and the through passage in the rotary shaft, no air hoses need to be connected to the tool plate in order to provide the same with negative and/or positive pressure. The omission of such air hoses reduces or eliminates the chances of the gripper head20becoming tangled or caught on an object. This also enables the gripper head20to be used and perform movements in environments with limited space. The efficiency of the robotic arm and the processes carried out by the arm can also be improved in that the tool40can be repositioned quickly without requiring additional movements of the robotic arm or wrist. FIG.13shows a comparison of a routine, which can be performed by the gripper head20ofFIG.12, to a corresponding routine by a conventional gripper head. With a conventional gripper head system, the gripper head initially grips the workpiece, for example, from a storage location or from a processing station processing the workpiece, and then moves the workpiece to a regrip station. At the regrip station, the workpiece is set down and released in a certain position. The gripper head is then repositioned via the robot to regrip the workpiece so that it can then be brought back to the processing station or another processing station gripped in such a manner so as to be properly provided for processing or further processing by the processing station. However, with the additional axis provided by the actuator according to the disclosure, it is possible to grip the workpiece and have the robot make a short movement to an area where the workpiece can be rotated via the actuator. As a result, the workpiece can be retrieved and repositioned for processing without needing to set the workpiece down for regripping in the repositioned manner. It is understood that the foregoing description is that of the preferred embodiments of the invention and that various changes and modifications may be made thereto without departing from the spirit and scope of the invention. 
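As a summary of the comparison made with reference to FIG.13, the two routines can be written out as step sequences. The sketch below is only a paraphrase of that comparison in code form; the step wording and function names are invented for readability and are not part of the disclosure.

def conventional_regrip_routine() -> list[str]:
    # Routine of a conventional gripper head, paraphrased from the description.
    return [
        "grip workpiece at storage location or processing station",
        "move workpiece to regrip station",
        "set workpiece down and release it in a defined position",
        "reposition gripper head via the robot",
        "regrip workpiece in the new orientation",
        "move workpiece to the (next) processing station",
    ]

def additional_axis_routine() -> list[str]:
    # Routine using the additional axis provided by the actuator (1).
    return [
        "grip workpiece",
        "make a short robot movement to a clear area",
        "rotate workpiece via the actuator",
        "move workpiece to the (next) processing station",
    ]

if __name__ == "__main__":
    print(len(conventional_regrip_routine()), "steps vs.",
          len(additional_axis_routine()), "steps with the additional axis")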
LIST OF REFERENCE NUMERALS
1 Actuator
2 Actuator body
5 Rotary shaft
6 Tool end of rotary shaft
7 Through passage
8 First opening
9 Second opening
10 Drive
12 Casing
14 Connector
15 Angle adjuster
20 Gripper head
21 Mounting section
22 Tool section
24 Attachment projections
25 Fasteners
26 First port
27 Second port
28 Connecting tube
29 Internal channel
30 Gripper head mount
31 Fasteners
32 Positive/negative pressure source opening/output
35 Pass-through passage
36 First opening of pass-through passage
37 Second opening of pass-through passage
40 Tool
41 Tool plate
42 Tool plate channel
43 Hub pocket
44 Receptacles
45 Rotary hub/Tool interface
46 Rotary hub passage
47 Rotary hub connectors
48 Suction cup
49 Suction cup stem
50 Gripper head assembly
60 Tool plate
61 Tool attachment section
62 Tool plate channel
63 Hub pocket
64 Receptacles
70 Supply line
71 Vacuum tube
72 Vacuum fitting
73 Pneumatic fitting
80 Second tool
81 Pinch gripper
82 Pinch gripper input
83 Pinch gripper jaws | 18,159
11858126 | DETAILED DESCRIPTION In the following a robot joint will be described that has a fluid-tight sealing of the joint, and also a robot comprising at least one such robot joint, a system comprising the robot and a method for sealing such a robot joint. The herein described robot joint is provided with a seal that is inflatable. The inflatable seal is provided in a joint gap spacing a first part and a second part of the robot joint, with a relative movement in between. With such a robot joint, it is possible to meet conflicting requirements found for a single joint seal, thus to both seal the joint and allow the parts of the joint to move in relation to each other, because the inflatable seal can be made to work in different modes. In a first mode, the inflatable seal provides operational protection. For example, when the robot operates in a meat processing factory, the inflatable seal shall protect against external contamination, such as incoming blood splashes. In this case, the contact pressure between the inflatable seal and a sealing face of the second part shall be set just at minimum level, so as to minimize the wear of the inflatable seal, while still providing a sealing function of the joint gap. In the second mode, the inflatable seal provides wash-down protection. During that phase, the joint seal shall protect against high pressure/temperature water jet. Since the robot is kept stationary during wash-down, that is, no part is moving, there is no concern regarding the shear load (friction force) on the inflatable seal. In this case, ideally, the contact pressure between the inflatable seal and the sealing face of the second part shall be set as high as possible, so as to maximize sealing capability. Thus, by minimizing the axial load of the inflatable seal of the joint during robot operation, the seal life time may be prolonged. Obviously, a fixed-profile seal could hardly fulfil the requirements for different operating modes, while by introducing an inflatable seal, the sealing becomes flexible. FIGS.1A and1Billustrate an industrial robot100with six (6) axes1-6, hereafter referred to as “robot100”. The robot100is a programmable robot that has six degrees of freedom (DOF). Each axis comprises a driving mechanism (not shown) for driving an arm or a wrist. The driving mechanism comprises a driving motor, for example a brushless DC motor. A transmission comprising speed reducers and/or gearboxes transmits the torque from the driving motor, via an output shaft of the driving motor, to the joint20of the axis. The joint20comprises a first part22and second part24(FIG.3). The first part22is typically arranged stationary in relation to the driving motor of the axis, and the second part24is arranged in relation to the outgoing shaft of the driving motor, and rotates in accordance with the rotation of the arm or wrist of the axis. Thus, the second part24will then rotate in relation to the first part22when the joint is operated. The first part22and the second part24are thus rotatable in relation to each other. Between the first part22and second part24there is a joint gap26(FIG.4), and an inflatable seal10is arranged to seal the joint gap26. Thus, the inflatable seal10is arranged to seal the first part22and the second part24. In the robot100of1A and1B, each joint is sealed with an inflatable seal. 
That is, the joint gap of the joint20aof the first axis1is sealed with a first inflatable seal10a, the joint gap of the joint20bof the second axis2is sealed with a second inflatable seal10b, the joint gap of the joint20cof the third axis3is sealed with a third inflatable seal10c, the joint gap of the joint20dof the fourth axis4is sealed with a fourth inflatable seal10d, the joint gap of the joint20eof the fifth axis5is sealed with a fifth inflatable seal10eand the joint gap of the joint20fof the sixth axis6is sealed with a sixth inflatable seal10f. It should be understood that a robot may comprise more or less joints than six, and thus more or less inflatable seals than six. It should also be understood that the number of inflatable seals may be less than the number of joints i.e., not every joint needs to comprise an inflatable seal. FIGS.2A and2Billustrate inflatable seals10according to two different embodiments of the invention, that can be used as the inflatable seals10a-10finFIGS.1A-1B, in isolation. The inflatable seal10may be produced from elastomers with high modulus of elasticity and considerable elongation. For example, the inflatable seal10can be made of silicone, styrene butadiene rubber or ethylene propylene. The material may be provided with an agent preventing bacterial and microbial growth, to meet the needs of hygienic applications. The inflatable seal10may be produced by joining together extruded or molded sections. The inflatable seal10can thus be made into one, single, integrated piece. The inflatable seal10is hollow and can be inflated by providing pressurized fluid to the interior of the inflatable seal10via an inlet11a. The fluid of the inflatable seal10may be expelled via an outlet11b. Such an embodiment is illustrated inFIG.2A. The inlet11aand the outlet11bcomprise small tubes that are rigidly attached, or integrated in, to the inflatable seal10, and fluidly communicate with the interior of the inflatable seal10. Alternatively, a common inlet/outlet11cis provided via the same tube, as illustrated inFIG.2B. The inflatable seal10has a circular shape, for example the shape of a hollow torus. In one embodiment, the inflatable seal10has the shape of an inflatable tube, for example similar to an inner tube of a bike wheel. FIG.3illustrates a robot joint20, for example one of the robot joints20a-20fofFIGS.1A-1B, from the exterior of the joint20. As mentioned, the robot joint20comprises a first part22and a second part24arranged to have a relative movement in between. Thus, the first part22and the second part24are movably arranged in relation to each other, and thereby allow a relative movement between them. The output shaft23of the axis comprising the joint20is schematically illustrated in the figure with the dotted lines. The output shaft23thus connects the first part22and the second part24of the joint. The robot joint also comprises a joint gap26(FIG.4) spacing the first part22and the second part24from each other. The robot joint20comprises an inflatable seal10accommodated in the joint gap26, to provide a fluid-tight sealing of the joint20. InFIG.3, the inflatable seal10is illustrated in cross-section. FIG.4illustrates an enlarged detail ofFIG.3, namely a cross-section of the inflatable seal10. The first part22comprises a groove delimiting the inflatable seal10from three sides such as to force the inflatable seal10to expand in a direction towards the second part24when inflating the inflatable seal10, and thereby closing the joint gap26. 
In more detail, the first part22may comprise a first axially extending structure22a, or wall part, delimiting the inflatable seal10against the exterior of the robot joint20. The first part22may also comprise a second axially extending structure22b, or wall part, that delimits the inflatable seal10against the interior of the robot joint20. In one embodiment, to make sure the inflatable seal10does not change its position in the housing, the inflatable seal10is assembled or attached to the first part22, for example by mechanical retaining and/or by gluing. The inflatable seal10is arranged to receive pneumatic supply through the inlet11a(FIG.2), to change its profile. The contact pressure between the inflatable seal10and the inner face24aof the second part24can be adjusted by adjusting the inner pressure of the inflatable seal10, to adapt to different operating modes of the joint. During a wash-down, it may require a contact pressure as high as possible to secure the sealing capability against water jet, while during a regular operation mode, a lower contact pressure is needed to seal off casual external contaminations. The inner pressure for providing a low contact pressure, thus a contact pressure required to seal off external ingressions, may be set to an atmospheric pressure. This mode is also referred to as the first mode, and the inner pressure is referred to as a predetermined low pressure. The inner pressure for providing a high contact pressure should be high enough to withstand impact force from the wash down. This mode is also referred to as the second mode, and the inner pressure is referred to as a predetermined high pressure. It should be emphasized that during any mode, the inflatable seal10is securely sealing the robot joint. The contact pressure is the pressure of the inflatable seal10against the inner face24a, or sealing face, of the second part24. As illustrated inFIG.4, the first part22comprises a channel25, in which a first tube27ais provided and attached to the inlet11afor inflating the inflatable seal10, and a second tube27bis provided and attached to the outlet tube11bfor deflating the inflatable seal10. This embodiment corresponds to the inflatable seal10illustrated inFIG.2A, and the seals10a-10finFIG.8. Instead, the channel25may comprise only one tube27attached to a common inlet/outlet11cof the seal10, corresponding to the inflatable seal10illustrated inFIG.2Band the seals10a-10finFIG.9. It is to be expected that dirt and/or bacteria will contaminate not only the part of the joint gap26delimited by the first part22, the second part24and the inflatable seal10and being open towards the exterior of the robot joint20, but also small distances within the interfaces between the inflatable seal10and the first part22and/or the second part24. That is, dirt and/or bacteria is expected to intrude between the inflatable seal10and the first part22and/or the second part24. It is furthermore expected that the interfaces between the inflatable seal10and the first part22and/or the second part24are particularly challenging to be properly cleaned during a wash-down. If the inflatable seal10consists of a homogenous material, inflating the same causes the inflatable seal10to be pressed even stronger against the first part22and the second part24at the region towards the exterior of the robot joint20, and thereby further counteracts the cleaning of the respective interfaces. 
Referring toFIG.5, in order to mitigate the aforementioned issue, according to one embodiment of the inflatable seal10the same is designed to strongly change its shape at the region open towards the exterior of the robot joint20. In order to achieve this, the outer portion of the inflatable seal10in radial direction is provided with an enforcement28in the form of a profile or profiles made of spring steel. The enforcement28is stiff in relation to the surrounding relatively flexible material, and thereby it provides the inflatable seal10with a non-homogenous structure. The relatively flexible material in effect functions as a spring29allowing the interior of the inflatable seal10to expand towards the surrounding walls, but at the same time counteracting the force exerted by the pressurized air, as schematically illustrated inFIG.5. The enforcement28has a larger area exposed to pressurized air on the side of the first part22compared to that on the side of the second part24, which causes the enforcement28to move towards the first part22at inflation of the inflatable seal10. As the remainder of the inflatable seal10consists of relatively flexible material, this movement in its turn causes the inflatable seal10to strongly change its shape at the region open towards the exterior of the robot joint20such as to effectively expose the respective interfaces for cleaning. In the following processes of pressurizing the one or several inflatable seals10of the robot100will be described. FIGS.6and7illustrate a system70comprising a robot100as described above, with a plurality of axes and joints, and inflatable seals10a-sealing the joint gaps of the joints.FIG.6illustrates the whole system70, whereasFIG.7illustrates the robot100in another view to show axis5and6that are not visible inFIG.6, but for simplicity without all parts of the system70. The system70also comprises a control unit80, a valve arrangement60and a tube arrangement90fluidly connected to the source61of pressurized fluid and to the inflatable seals10a-10f. For working applications, the robot100may be in need of pressurized fluid, and normally the robot100is already located in connection to a source61of pressurized fluid. InFIG.6, this source61of pressurized fluid is depicted as a box, but it should be understood that the source may include a container with pressurized fluid, a compressor for pressurizing the fluid etc. InFIGS.6and7, the plurality of inflatable seals10a-10fare fluidly connected in parallel as also is illustrated inFIG.9. The same fluid tube27in the robot100is then used for deflation and inflation of the inflatable seals10a-10f, and the fluid tube27is being passed through hollow spaces of the robot100, for example through hollow shafts52,53,54and56and inside an enclosure100aof the robot100, to fluidly connect to all seals10a-10f. In one embodiment, and in operation, the pressurized fluid is guided in the system70from a source61via a fluid line27inside the robot100to the furthest away located inflatable seal10f, that is here sealing axis six. The valve arrangement comprises a three-position valve67and a first valve64. The tube arrangement90comprises a first fluid line91, a second fluid line92, a third fluid line93and a fourth fluid line94. The first fluid line91is connected between the source61and the three-position valve67. The second fluid line92is connected between the three-position valve67and the fluid tube27. The third fluid line93is connected between the three-position valve67and an outlet66. 
The fluid may be passed out from the system70to the outlet66for recycling the pressurized fluid, here schematically illustrated as a box. The fourth fluid line94connects the second fluid line92to the atmosphere. The fourth fluid line94is fitted with the first valve64and a manometer63. The default position of the valve67is to keep all flow terminals of the valve closed, which here is the middle position of the valve67, also referred to as a closed state. Before operating the robot100, a valve coil67bis first energized, which ensures that all joint seals10a-10fare deflated, so all joint seals10a-10fkeep minimum required contact force against contact surfaces. This corresponds to the right-hand side position of the valve67, whereby the air in the joint seals10a-10fis passed to the outlet66for recycling the pressurized fluid, whereby the pressure in the joint seals10a-10fwill correspond to atmospheric pressure. Before wash down of the robot100, valve coil67ais first energized and hold in its energized position, which allows pressurized gas to be passed into the joint seals10a-10fto inflate the same. When the pressure inside the second fluid line92and thus also the fourth fluid line94reaches a desired pressure value, the manometer63will indicate this to the control unit80, which triggers a signal to be sent that de-energizes the valve coil67a, whereby the valve67returns to its default position and keeps the pressure in the joint seals10a-10fstable. A minimum threshold and/or maximum threshold can be set in the control unit80, such that in case of pressure change, the air pressure in the joint seals10a-10fcan be regulated back to the desired value. The valve arrangement60and the tube arrangement90may comprise a filter68and an orifice69arranged to the third fluid line93. The valves herein are for example hydraulically, pneumatically, or electrically controlled valves, that comprises springs and operators to change the state of the valves and thus the direction of the flow. The control unit80is schematically illustrated in theFIG.6with a box, and it is understood that the control unit80is connected by wire or wirelessly to the valve67and to first valve64, and in some embodiments to the manometer63. The control unit80is programmed to pressurize the inflatable seals10a-10fby means of the valve arrangement60and the tube arrangement90. By pressurizing the seals10a-10f, the inflatable seals10a-10fcan be made to expand in an axial direction towards the second part24of each robot joint, to seal the joint gap between the first part22and the second part24of each robot joint with a high contact pressure. The control unit80is also programmed to de-pressurize the inflatable seals10a-10fby means of the valve arrangement60and the tube arrangement90such that the inflatable seals10a-10fcontract in an axial direction towards the first part22of each robot joint, to lower the contact pressure from the inflatable seal on each robot joint. The control unit80is configured to control the valve arrangement60to pressurize the inflatable seals10a-10fin synchronization with the robot operation. For example, the control unit80is arranged to provide power to the valve arrangement60in synchronization with the robot operation, thus, when the robot100is operating, the valve arrangement60is also powered, and when the robot100is not operating or is not powered, the valve arrangement60is also not powered. 
In one embodiment, the control unit80is configured to control the valve arrangement60to pressurize the inflatable seals10a-10f, such that the inflatable seals10a-10fare pressurized to a predetermined low pressure when the robot100is working, and to a predetermined high pressure when the robot100is exposed to high pressure wash down and/or is shut down. The control unit80may for that purpose monitor the operation of the robot100to understand when the robot is exposed to wash-down, is operating or not operating etc. The control unit80may for example receive one or several signals from the robot100indicating the status of the same, that is, if the robot100is exposed to wash-down, whether it is operating or not, and whether it is powered. This functionality may alternatively be incorporated with the powering to the robot100, and thus, when the robot100is powered, the valves64,67are also powered (to inflate or deflate seals10a-10f), and when the robot100is not powered, the valves64,67are also not powered (to maintain the pressure in the seals10a-10f). More in detail, the control unit80comprises a processor and a memory. The control unit80is for example an external computer, or a robot controller of the robot100. The memory may include a computer program, wherein the computer program comprises a computer program code to cause the control unit80, or a computer connected to the control unit80, to perform the method as will be described in the following. The program may be stored on a computer-readable medium, such as a memory stick or a CD-ROM. A computer program product may comprise a computer program code stored on such a computer-readable medium to perform the method as described herein, when the computer program code is executed by the control unit80or by a computer connected to the control unit80. FIG.8illustrates the plurality of inflatable seals10a-10f(in isolation) while connected in series by means of the tube arrangement90, according to one embodiment. The inflatable seals have different diameters to fit in the joint gap of the corresponding joint. The fourth inflatable seal10dand the fifth inflatable seal10ehave been omitted for simplicity. The tube arrangement90includes the tubes27a-27f. Each tube in the tube arrangement (except the first tube27aand the last tube27f) connects an inflatable seal with a next closest inflatable seal. For example, the tube27econnects the second inflatable seal10bwith the first inflatable seal10a, and the tube27dconnects the second inflatable seal10bwith the third inflatable seal10c. FIG.9illustrates the plurality of inflatable seals10a-10f(in isolation) while connected in parallel by means of the tube arrangement90including the tube27, according to another embodiment. Thus, the tube27connects all the plurality of inflatable seals10a-10fin parallel. In the following, a corresponding method for sealing a joint gap26of a robot joint20will be illustrated with reference to the flow chart ofFIG.10. It should be understood that the method may be used for sealing joint gaps26of a plurality of joints of the robot100by means of a plurality of inflatable seals10, but the method is here for simplicity explained with reference to sealing only one robot joint with an inflatable seal. The method comprises pressurizing S1the inflatable seal10such that the inflatable seal expands in an axial direction towards the second part24to seal the joint gap26between the first part22and the second part24.
Thus, the inflatable seal10can be pressurized to provide a variable sealing of the robot joint. In one embodiment, the method comprises pressurizing S1the inflatable seal10in synchronization with the robot operation. For example, the method comprises pressurizing S11the inflatable seal10to a predetermined low pressure upon receiving an indication that the robot100is working. The method may also include pressurizing S12the inflatable seal10to a predetermined high pressure upon receiving an indication that the robot100is not working and/or is exposed to wash down. The predetermined high pressure is higher than the predetermined low pressure. The low pressure is for example atmospheric pressure. The present invention is not limited to the above-described preferred embodiments. Various alternatives, modifications and equivalents may be used. For example, while the disclosure refers to an embodiment where the relative movement between the first part22and a second part24occurs between surfaces that are axial or at least have axial direction components, the invention can also be applied in joints between two radial surfaces, such as between a shaft and a shaft passage. Moreover, the relative movement is not limited to a rotational movement but can also contain or consist of a linear movement. Therefore, the above embodiments should not be taken as limiting the scope of the invention, which is defined by the appended claims. | 22,201
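The control behaviour described above, a predetermined low pressure while the robot is working and a predetermined high pressure for wash-down or shut-down, with regulation back toward the set point between thresholds, can be sketched as a simple state-based controller. The sketch below is an illustration under assumed names and placeholder values; it is not the implementation of the control unit80, and none of the identifiers or numbers come from the disclosure.

from enum import Enum, auto

class RobotState(Enum):
    OPERATING = auto()
    WASH_DOWN = auto()
    SHUT_DOWN = auto()

# Placeholder set points; the disclosure only states that the low pressure may
# be atmospheric and that the high pressure must withstand the wash-down jet.
LOW_PRESSURE_KPA = 101.3
HIGH_PRESSURE_KPA = 250.0
TOLERANCE_KPA = 10.0          # regulation band around the set point

def target_pressure(state: RobotState) -> float:
    """Select the seal set point from the robot state (first or second mode)."""
    return LOW_PRESSURE_KPA if state is RobotState.OPERATING else HIGH_PRESSURE_KPA

def valve_command(state: RobotState, measured_kpa: float) -> str:
    """Return which action the valve arrangement should take."""
    setpoint = target_pressure(state)
    if measured_kpa < setpoint - TOLERANCE_KPA:
        return "inflate"   # e.g. energize the coil admitting pressurized fluid
    if measured_kpa > setpoint + TOLERANCE_KPA:
        return "deflate"   # e.g. energize the coil venting toward the outlet
    return "hold"          # keep the valve in its closed default position

if __name__ == "__main__":
    print(valve_command(RobotState.OPERATING, 180.0))   # deflate toward low pressure
    print(valve_command(RobotState.WASH_DOWN, 180.0))   # inflate toward high pressure
    print(valve_command(RobotState.WASH_DOWN, 252.0))   # hold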
11858127 | DETAILED DESCRIPTION The various embodiments will be described in detail with reference to the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. References made to particular examples and implementations are for illustrative purposes, and are not intended to limit the scope of the invention or the claims. FIGS.1A-1Dillustrate a system300for performing robotically-assisted image-guided surgery according to various embodiments.FIG.1Ais a front perspective view of the system300andFIG.1Bis a rear perspective view of the system300.FIG.1Cis a top view of the system300andFIG.1Dis a front elevation view of the system300. The system300in this embodiment includes a robotic arm301, an imaging device303and a motion tracking system305. The robotic arm301may comprise a multi-joint arm that includes a plurality of linkages connected by joints having actuator(s) and optional encoder(s) to enable the linkages to bend, rotate and/or translate relative to one another in response to control signals from a robot control system. The robotic arm301may be fixed to a support member350at one end and may have an end effector302at the other end of the robotic arm301. The end effector302may be most clearly seen inFIGS.1B and3. Although a single robotic arm301is shown inFIGS.1A-1D, it will be understood that the system300may include multiple robotic arms attached to suitable support structure(s). The robotic arm301may aid in the performance of a surgical procedure, such as a minimally-invasive spinal surgical procedure or various other types of orthopedic, neurological, cardiothoracic and general surgical procedures. In the embodiment ofFIGS.1A-1B, the robotic arm301may be used to assist a surgeon performing a surgical procedure in a cervical spinal region of a patient. The robotic arm301may also be used for thoracic and/or lumbar spinal procedures. The procedures may be performed posteriorly, anteriorly or laterally. In embodiments, the robotic arm301may be controlled to move the end effector302to one or more pre-determined positions and/or orientations with respect to a patient200. In some embodiments, the end effector302may be or may have attached to it an invasive surgical tool, such as a needle, a cannula, a dilator, a cutting or gripping instrument, a drill, a screw, an electrode, an endoscope, an implant, a radiation source, a drug, etc., that may be inserted into the body of the patient. In other embodiments, the end effector302may be a hollow tube or cannula that may receive an invasive surgical tool100(seeFIG.1B), including without limitation a needle, a cannula, a tool for gripping or cutting, an electrode, an implant, a radiation source, a drug and an endoscope. The invasive surgical tool100may be inserted into the patient's body through the hollow tube or cannula by a surgeon. The robotic arm301may be controlled to maintain the position and orientation of the end effector302with respect to the patient200to ensure that the surgical tool(s)100follow a desired trajectory through the patient's body to reach a target area. The target area may be previously-determined during a surgical planning process based on patient images, which may be obtained using the imaging device303. The imaging device303may be used to obtain diagnostic images of a patient200, which may be a human or animal patient. In embodiments, the imaging device303may be an x-ray computed tomography (CT) imaging device. 
The patient200may be positioned within a central bore307of the imaging device303and an x-ray source and detector may be rotated around the bore307to obtain x-ray image data (e.g., raw x-ray projection data) of the patient200. The collected image data may be processed using a suitable processor (e.g., computer) to perform a three-dimensional reconstruction of the object. In other embodiments, the imaging device303may comprise one or more of an x-ray fluoroscopic imaging device, a magnetic resonance (MR) imaging device, a positron emission tomography (PET) imaging device, a single-photon emission computed tomography (SPECT) imaging device, or an ultrasound imaging device. In embodiments, image data may be obtained pre-operatively (i.e., prior to performing a surgical procedure) or intra-operatively (i.e., during a surgical procedure) by positioning the patient200within the bore307of the imaging device303. In the system300ofFIGS.1A-1D, this may be accomplished by moving the imaging device303over the patient200to perform a scan while the patient200may remain stationary. The imaging device303may also be used to validate a surgical intervention, such as by determining that an invasive tool, instrument and/or implant has been placed in the proper location in the patient's body. Examples of x-ray CT imaging devices that may be used according to various embodiments are described in, for example, U.S. Pat. No. 8,118,488, U.S. Patent Application Publication No. 2014/0139215, U.S. Patent Application Publication No. 2014/0003572, U.S. Patent Application Publication No. 2014/0265182 and U.S. Patent Application Publication No. 2014/0275953, the entire contents of all of which are incorporated herein by reference. In the embodiment shown inFIGS.1A-1D, the patient support60(e.g., surgical table) upon which the patient200may be located is secured to the imaging device303, such as via a column50which is mounted to a base20of the imaging device303. In the embodiment ofFIGS.1A-1D, the patient200is supported on a patient table60that is rotated away from the bore307of the imaging device303. During an imaging scan, the patient support60may be rotated in-line with the bore307such that the patient axis is aligned with the imaging axis of the imaging device303. A portion of the imaging device303(e.g., an O-shaped imaging gantry40) which includes at least one imaging component may translate along the length of the base20on rails23to perform an imaging scan of the patient200, and may translate away from the patient200to an out-of-the-way position for performing a surgical procedure on the patient200. An example imaging device303that may be used in various embodiments is the AIRO® intra-operative CT system manufactured by Mobius Imaging, LLC and distributed by Brainlab, AG. Other imaging devices may also be utilized. For example, the imaging device303may be a mobile CT device that is not attached to the patient support60and may be wheeled or otherwise moved over the patient200and the support60to perform a scan. Examples of mobile CT devices include the BodyTom® CT scanner from Samsung Electronics Co., Ltd. and the O-arm® surgical imaging system from Medtronic, plc. The imaging device303may also be a C-arm x-ray fluoroscopy device. In other embodiments, the imaging device303may be a fixed-bore imaging device, and the patient200may be moved into the bore of the device, either on a surgical support60as shown inFIGS.1A-1D, or on a separate patient table that is configured to slide in and out of the bore.
The motion tracking system305in this embodiment includes a plurality of marker devices119,202and315and a stereoscopic optical sensor device311that includes two or more cameras (e.g., IR cameras). The optical sensor device311may include one or more radiation sources (e.g., diode ring(s)) that direct radiation (e.g., IR radiation) into the surgical field, where the radiation may be reflected by the marker devices119,202and315and received by the cameras. A computer313may be coupled to the sensor device311as schematically illustrated inFIG.1Dand may determine the positions and orientations of the marker devices119,202,315detected by the cameras using, for example, triangulation techniques. A3D model of the surgical space may be generated and continually updated using motion tracking software implemented by the computer313. In embodiments, the computer313may also receive image data from the imaging device303and may register the image data to a common coordinate system with the motion tracking system305using image registration techniques as are known in the art. In embodiments, a reference marker device315(e.g., reference arc) may be rigidly attached to a landmark in the anatomical region of interest (e.g., clamped or otherwise attached to the spinous process of a patient's vertebrae) to enable the anatomical region of interest to be continually tracked by the motion tracking system305. Another marker device202may be rigidly attached to the robotic arm301, such as on the end effector302of the robotic arm301, to enable the position of robotic arm301and end effector302to be tracked using the motion tracking system305. The computer313may include software configured to perform a transform between the joint coordinates of the robotic arm301and the common coordinate system of the motion tracking system305, which may enable the position and orientation of the end effector302of the robotic arm301to be controlled with respect to the patient200. In addition to passive marker devices described above, the motion tracking system305may alternately utilize active marker devices that may include radiation emitters (e.g., LEDs) that may emit radiation that is detected by an optical sensor device311. Each active marker device or sets of active marker devices attached to a particular object may emit radiation in a pre-determined strobe pattern (e.g., with modulated pulse width, pulse rate, time slot and/or amplitude) and/or wavelength which may enable different objects to be uniquely identified and tracked by the motion tracking system305. One or more active marker devices may be fixed relative to the patient, such as on a reference marker device as described above or secured to the patient's skin via an adhesive membrane or mask. Additional active marker devices may be fixed to surgical tools100and/or to the end effector302of the robotic arm301to allow these objects to be tracked relative to the patient. In further embodiments, the marker devices may be passive maker devices that include moiré patterns that may enable their position and orientation to be tracked in three-dimensional space using a single camera using Moiré Phase Tracking (MPT) technology. Other tracking technologies, such as computer vision systems and/or magnetic-based tracking systems, may also be utilized. The system300may also include a display device319as schematically illustrated inFIG.1D. The display device319may display image data of the patient's anatomy obtained by the imaging device303. 
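The transform performed by the computer313between the joint coordinates of the robotic arm301and the common coordinate system of the motion tracking system305amounts to chaining rigid-body transforms. The sketch below illustrates that chaining with 4x4 homogeneous matrices; the frame names and numeric values are placeholders, and the actual registration procedure used by the system is not specified here.

import numpy as np

def make_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def rot_z(angle_rad: float) -> np.ndarray:
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# End-effector pose in the robot base frame (e.g. from the arm's joint encoders
# and forward kinematics). Values are arbitrary placeholders.
T_base_ee = make_transform(rot_z(np.deg2rad(30.0)), np.array([0.40, 0.10, 0.55]))

# Registration of the robot base frame in the tracking system's common frame,
# e.g. derived from tracking the marker device (202) on the end effector.
T_track_base = make_transform(rot_z(np.deg2rad(-90.0)), np.array([1.20, 0.00, 0.80]))

# End-effector pose expressed in the common tracking/patient coordinate system.
T_track_ee = T_track_base @ T_base_ee
print(np.round(T_track_ee, 3))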
The display device319may facilitate planning for a surgical procedure, such as by enabling a surgeon to define one or more target positions in the patient's body and/or a path or trajectory into the patient's body for inserting surgical tool(s) to reach a target position while minimizing damage to other tissue or organs of the patient. The position and/or orientation of one or more objects tracked by the motion tracking system305may be shown on the display319, and may be shown superimposed with the image data. For example, the position and/or orientation of an implantable surgical tool100with respect to the patient's anatomy may be graphically depicted on the display319based on the tracked position/orientation of the marker device119fixed to the tool100and the known geometry of the tool100, which may be pre-registered with the motion tracking system305. The display319may also include graphical depictions of other objects, such as implants (e.g., pedicle screws). In various embodiments, the imaging device303may be located close to the surgical area of the patient200which may enable pre-operative, intra-operative and post-operative imaging of the patient200, preferably without needing to remove the patient from the operating theater or transitioning the patient from the surgical table60. In embodiments, the imaging device303may be located less than about 5 meters, such as less than about 2 meters (e.g., less than about 1 meter) from the surgical area of the patient200during a surgical procedure. As shown, for example, inFIGS.1A-1D, the imaging device303(e.g., X-ray CT scanner) may include an imaging gantry40that may be moved (i.e., translated) over the surgical area of patient200to perform an imaging scan and may be moved (i.e., translated) away from the surgical area of the patient200so as not to interfere with a surgeon performing a surgical procedure. In the embodiment shown inFIG.1, the imaging gantry40may be supported by a gimbal support30, which may include a pair of arms31,33extending upwards from the base20which may each attach to opposite sides of the gantry40. The gimbal30and the gantry40may translate together along the length of the base20to perform an imaging scan. In some embodiments, the gantry40may be attached to the arms31,33of the gimbal30by rotary bearings which may enable the gantry40to tilt with respect to the gimbal30and the base20to obtain patient images at an oblique angle. In various embodiments, the robotic arm301may be attached to a support structure that is also located close to the surgical area of the patient200. For example, the base end304of the robotic arm301(i.e., the end of the arm301opposite the end effector302) may be fixed to a support structure at a position that is less than about 2 meters, such as less than about 1 meter (e.g., between 0.5 and 1 meter) from the surgical area of the patient200during a surgical procedure. In a conventional robotically-assisted surgical system, a robotic arm may be mounted to a mobile cart that may be moved proximate to the surgical area of the patient200, typically approaching the surgical table60from a side of the table60. The cart may remain fixed in place adjacent to the surgical table60while a robotic arm may extend from the cart into the surgical area during a surgical procedure. Alternately, the mobile cart may be used primarily for transport of the robotic arm to and from a position proximate to the surgical area. 
During surgery, the robotic arm may be attached to another support structure, such as a surgical side rail of the patient table60, and the cart may be moved out of the way. In either case, the robotic arm and/or cart may occupy a relatively large amount of space in the surgical area. For example, the robotic arm may take up space that would otherwise be occupied by a surgeon or other clinician during the surgical procedure, which may impede workflow. In addition, the robotic arm and/or cart will often be positioned so as to impede imaging of the patient by an imaging device303. For example, a robotic arm and/or cart positioned along a side of the patient table60may not fit within the bore307of the gantry40of the imaging device303, and may need to be removed prior to imaging of the patient200. In the embodiment ofFIGS.1A-1D, the robotic arm301may be mounted to a mobile shuttle330having a support member350for a robotic arm301that may extend at least partially over an outer surface (e.g., circumference) of the gantry40of the imaging system303when the shuttle330is moved adjacent to the imaging system303. In the embodiment ofFIG.1, the support member350comprises a curved rail that extends around the outer circumference of the gantry40. The support member350may extend around at least about 25%, such as between about 30-50% of the outer circumference of the gantry40. The support member350may extend around at least a portion of the outer circumference of the gantry40that is located above the surgical area of the patient200. In the embodiment ofFIG.1, the support member350forms a semicircular arc that extends between a first end351, which is located proximate to the end of a first arm31of the gimbal30, and a second end353, which is located proximate to the end of a second arm33of the gimbal30when the shuttle330is positioned adjacent to the imaging system303. The semicircular arc support member350may be concentric with the outer circumference of the gantry40. In embodiments, the support member350may extend along a semicircular arc having a radius that is greater than about 33 inches, such as greater than about 35 inches (e.g., between 33 and 50 inches). The support member350may be spaced from the outer surface of the gantry40by a pre-determined distance, which may be from less than an inch (e.g., 0.5 inches) to 6 or 10 inches or more. In some embodiments, the support member350may be spaced from the gantry40by an amount sufficient to enable the tilt motion of the gantry40with respect to the gimbal30over at least a limited range of motion. In addition to a curved support member350, in some embodiments the support member350may comprise one or more straight segments (e.g., rail segments), where at least a portion of the support member350may extend over the top surface of the gantry40. A carriage360may be located on the support member350and may include a mounting surface361for mounting the base end304of the robotic arm301to the carriage360. As shown inFIGS.1A-1D, the carriage360may extend from the support member350towards a first (e.g., front) face of the gantry40. The mounting surface361for the robotic arm301may extend beyond the first (e.g., front) face of the gantry40and the robotic arm301may extend over the first (e.g., front) face of the gantry40.
In some embodiments, the configuration of the carriage360and mounting surface361may be reversed such that the mounting surface361extends beyond the second (e.g., rear) face of the gantry40, and the robotic arm301may extend over the second (e.g., rear) face of the gantry40. In this configuration, the patient support60may be configured such that the patient support60and patient200extend into or through the bore307of the gantry40, and a portion of the patient200requiring surgical intervention (e.g., the cranium) may be accessed from the second (e.g., rear) side of the gantry40. In some embodiments, the carriage360and the robotic arm301attached thereto may be moved to different positions along the length of support member350(e.g., any arbitrary position between the first end351and the second end353of the support member360). The carriage360and the robotic arm301may be fixed in place at a particular desired position along the length of the support member350. In some embodiments, the carriage360may be moved manually (e.g., positioned by an operator at a particular location along the length of the support member350and then clamped or otherwise fastened in place). Alternately, the carriage360may be driven to different positions using a suitable drive mechanism (e.g., a motorized belt drive, friction wheel, gear tooth assembly, cable-pulley system, etc., not shown inFIGS.1A-1D). The drive mechanism may be located on the carriage360and/or the support member350, for example. An encoder mechanism may be utilized to indicate the position of the carriage360and the base end304of the robotic arm301on the support member350. Although the embodiment ofFIGS.1A-1Dillustrate one robotic arm301mounted to the support member350, it will be understood that more than one robotic arm may be mounted to the support member350via respective carriages360. Further, in some embodiments, the robotic arm301may be mounted directly to the support member350, such as on a mounting surface361that is integrally formed on the support member350. In such an embodiment, the position of robotic arm301may not be movable along the length of the support member350. In some embodiments, there may be sufficient clearance between the support member350and/or carriage360and the outer circumference of the gantry40to enable the shuttle330with the robotic arm301attached to approach the imaging system303from the second (e.g., rear) side of the imaging system303such that the robotic arm101may pass over the outer circumference of the gantry40and then extend over the front side of the gantry40in a configuration such as shown inFIGS.1A-1D. In embodiments, the robotic arm301may be in a first pose in order to reduce its profile in the radial direction as it passes over the gantry40and may then be extended in a direction towards the patient200as shown inFIGS.1A-1D. Alternately or in addition, the carriage360may be hinged to enable the mounting surface361to be pivoted upwards to provide additional clearance for the robotic arm301to pass over the gantry40when the mobile shuttle330is positioned adjacent to the imaging system303and may then be pivoted downward to the configuration shown inFIGS.1A-1D. In some embodiments, the height of the support member350may be temporarily raised, such as via a jack mechanism on the mobile shuttle330, to allow the robotic arm301to pass over the gantry40, and may then be lowered to the configuration shown inFIGS.1A-1D. 
In further embodiments, the support member350may be moved over the gantry40without the robotic arm301or the carriage360mounted to the support member350, and the robotic arm301or the carriage360may be mounted to the support member350after the mobile shuttle330is moved into position adjacent to the imaging system303. In some embodiments, the robotic arm303may be mounted to the carriage360via an adaptor, which may be a quick-connect/disconnect adaptor. In some embodiments, the robotic arm301may be mounted to a mounting surface361that is located on a top surface of the carriage360to enable the robotic arm301to pass over the gantry40, such as shown in the embodiment ofFIGS.4A-4D, described below. The mobile shuttle330further includes a base401having a plurality of wheels403attached to the base401that enable the mobile shuttle330to be moved over a surface (e.g., a floor). In the embodiment ofFIGS.1A-1D, the base401includes two sets of wheels403, including a first set of wheels403alocated proximate to a first end402the mobile shuttle330and a second set of wheels403blocated proximate to a second end404of the mobile shuttle330. The wheels403may be positioned and distributed to provide balance and stability to the mobile shuttle330and may enable the shuttle to be moved, with or without one or more robotic arms301attached, without tipping over. In the embodiment ofFIGS.1A-1D, each of the wheels403of the mobile shuttle are located in a caster assembly, which may be a swivel-type caster assembly to provide increased maneuverability of the shuttle330. However, it will be understood that other configurations for the wheels403may be utilized. In some embodiments, at least a portion of the wheels403may be geared into a drive mechanism for propelling the mobile shuttle330over a surface. The base401of the mobile shuttle330may include two parallel rails405,407, where each of the wheels403may be mounted to a rail405,407. The rails405,407may be separated from each other by a distance that is greater than a width of the base20of the imaging system303. In one embodiment, the rails405,407are separated by at least about 22 inches. When the mobile shuttle330is moved adjacent to the imaging system303as shown inFIGS.1A-1D, the rails405,407may extend at least partially along opposing sides of the base20of the imaging system303. The rails405,407may have a top surface that is less than a foot from the floor, and preferably less than about 8 inches from the floor. As shown inFIGS.1A-1D, the height of the rails405,407may enable the rails405,407to move under and fit beneath a portion of the imaging system303, such as the arms31,33of the gimbal30of the imaging system303. A connecting member409which may extend generally transverse to the rails405,407may connect the rails405,407to each other. The connecting member409may be located closer to the second end404of the shuttle330than to the first end402, as shown inFIG.1B. In embodiments, the connecting member409may extend upwards from each of the rails405,407to form a bridge portion411, as shown inFIG.1B. The bridge portion411may have sufficient clearance to extend over the base20of the imaging system303, as shown inFIG.1B. In embodiments, the bridge portion411may have a clearance height of at least about 7 inches (e.g., 8-12 inches). The height of the bridge portion411may be such that it does not interfere with the tilt motion of the gantry40. At least one arm413,415may extend upwards from the base401of the mobile shuttle330. 
As shown inFIG.1B, a pair of arms413,415may extend from respective rails405,407of the base401. In other embodiments, at least one arm413,415may extend from the connecting member409. Each of the arms413,415may have a shape that substantially corresponds to the shape of the respective arms31,33of the gimbal30which supports the gantry40of the imaging system303. When the mobile shuttle330is moved adjacent to the imaging system303as shown inFIGS.1A-1D, the arms413,415may extend adjacent to the arms31,33of the gimbal30. The arms413,415may have a curved profile over at least a portion of their length, where the shape of the curve may substantially correspond to the shape of the outer circumference of the O-shaped gantry40. The arms413,415may be located radially-outwards from the outer circumference of the gantry40, which may enable the gantry40to tilt on the gimbal30. In some embodiments, a width of the mobile shuttle330defined between the outer surfaces of the arms413,415may be less than a width of the imaging system303, which may be defined by external surfaces of the arms31,33of the gimbal30. Thus, when the mobile shuttle330is positioned adjacent to the imaging system303, the arms413,415of the mobile shuttle330may be completely hidden behind the gimbal30when the system300is viewed head-on, as illustrated inFIG.1D. In some embodiments, a reinforcing member417may extend between the arms413,415and may also be connected to the bridge portion411, as shown inFIG.1B. The reinforcing member417may have a curved shape that may conform to the shape of the outer circumference of the O-shaped gantry40. The reinforcing member417may be offset from the rear face of the gantry40as shown inFIG.4B. The reinforcing member417may extend radially-outwards from the outer circumference of the gantry40to enable the gantry40to tilt on the gimbal30. The arms413,415, the connecting member409and optional reinforcing member417preferably do not interfere with any cables or fluid lines extending through the bore307of the gantry40(e.g., as may be required by an anesthesiologist) and may have a relatively small profile in the lateral direction (i.e., parallel to the imaging axis of the gantry40, or in the z-axis direction), such as less than about 10 inches in lateral width (e.g., less than 8 inches in lateral width, including less than about 6 inches in lateral width). This may enable a patient200extending into or through the bore307of the gantry40to be easily accessed from the rear side of the gantry40without interference from the mobile shuttle330. The at least one arm413,415extending from the base401of the mobile shuttle330may be off-set from the support member350upon which the at least one robotic arm301is mounted. As shown inFIGS.1B and1C, for example, the arms413,415may be located adjacent to a face (e.g., rear face) of the gantry40and gimbal30and the support member350may extend above the arms31,33of the gimbal30and over the outer circumference of the gantry40. A lateral connector portion419may extend in a lateral direction (i.e., parallel to the imaging axis of the gantry40, or in the z-axis direction) between each of the arms413,415and the respective first and second ends351,353of the support member350. The lateral connector portion419may be a separate structure that is secured (e.g., bolted or welded) between the end of an arm413,415and the respective end351,353of the support member350, as shown inFIGS.1A-1D. 
Alternately, the support member350and arm(s)413,415may be formed as a unitary structure having a bent or curved segment forming the lateral connector portion419. As shown inFIGS.1A-1D, the lateral connector portions419may have a curved profile that corresponds with an outer surface of the arms31,33of the gimbal30. Thus, when the mobile shuttle330is positioned adjacent to the imaging device303, the ends of each of the arms31,33of the gimbal30may be nested beneath the connector portions419. FIGS.1A-1Dschematically illustrate a support arm421for an optical sensor device311(e.g., multi-camera array) of a motion tracking system305mounted to the mobile shuttle330. The support arm421may be mounted to the support member350. In some embodiments, the support arm421may be a telescoping arm in order to adjust the length of the support arm421. Alternately, the support arm421may have a fixed length. The support arm421may also rotate or pivot on a first joint423to adjust the rotational position of the optical sensor device311. The support arm421may also include a second joint425(e.g., a ball joint) at the distal end of the arm421to adjust the orientation of the optical sensor device311. The support arm421may include a handle427at the distal end of the arm421to enable a user to adjust the pose of the optical sensor device311. The support arm421may include features that hold the optical sensor device311in a desired pose during a surgical procedure. There are a variety of ways in which a support arm421for an optical sensor device311may be mounted to a mobile shuttle330. In embodiments, the support arm421may be clamped or otherwise fastened onto the support member350. The support arm421may be moved to various positions along the length of the support member350and fastened in place at a desired position. In some embodiments, the support arm421may be permanently mounted to a particular position on the support member350. Alternately, the support arm421may be removably mounted (e.g., clamped onto) or non-removably mounted (e.g., bolted or welded) to the carriage360upon which the robotic arm301is mounted. In some embodiments, the support arm421may be mounted to a separate carriage that may be movable along the length of the support member350independent of the movement of the carriage360for the robotic arm301.FIG.3illustrates an embodiment of a support member350for a mobile shuttle330that includes a pair of curved support rails433,435that extend parallel to one another over an outer surface (e.g., circumference) of the gantry40of an imaging system303. In this embodiment, the robotic arm301is mounted to a first moveable carriage360on a first support rail433and the support arm421for the optical sensor device311is mounted to a second moveable carriage430on the second support rail435. The two carriages360and430may be moved independently of one another, as illustrated inFIG.3. The support arm421may also be slidable within an opening428in the second carriage430to adjust the displacement of the optical sensor device311. In some embodiments, a support member350having a pair of support rails433,435for first and second carriages360,430may be directly mounted to the imaging device303(e.g., mounted to one or both arms31,33of the gimbal30) rather than to a separate mobile shuttle330. In some embodiments, the robotic arm301may be mounted to a mobile shuttle330and the support arm421for the optical sensor device311may be directly mounted to the imaging device303. 
In various embodiments, a mobile shuttle330as shown inFIGS.1A-1Dmay be transported via the wheels403to a position adjacent to the imaging device303such that the support member350for the at least one robotic arm301extends at least partially over the outer surface of the gantry40. The arms413,415and/or the connector portions419may be used by an operator to steer and maneuver the mobile shuttle330during transport. In some embodiments, the mobile shuttle330may be fixed in place when it is moved to a desired position. For example, the wheels403may be locked or may be retracted relative to the base401to lower the mobile shuttle330to the floor. In some embodiments, stabilizer elements may project from or may be extended down from the base to fix the position of the shuttle330on the floor. Alternately, the shuttle330may remain moveable with respect to the floor. In some embodiments, an attachment mechanism439(schematically illustrated inFIG.1B) may be utilized to physically couple the mobile shuttle330to the imaging system303. In the example shown inFIG.1B, the attachment mechanism439is located on the arms413,415of the mobile shuttle330and couples the arms413,415to the arms31,33of the gimbal30on the imaging system303. However, one or more attachment mechanisms439may be located on any portion of the mobile shuttle330(e.g., the rails405,407, connecting member409, lateral connector portion419or support member350) for coupling the shuttle330to a portion of the imaging system303. In general, the attachment mechanism439may couple the mobile shuttle330to a portion of the imaging system303that moves relative to the patient200during an imaging scan, such as the gantry40or the gimbal30. This may enable the mobile shuttle330and the robotic arm301to move with the gantry40and gimbal30during an imaging scan. The mobile shuttle330may also move with the entire imaging system303when the system303is transported. The attachment mechanism439may be any suitable mechanism for physically coupling the mobile shuttle330to a portion of the imaging system303, such as a clamp, a latch, a strap that can be secured around a portion of the imaging system303or a pair of mechanical stops that "capture" a portion of the imaging system303to enable bi-directional translation of the mobile shuttle330in coordination with the translation of at least a portion of the imaging system303relative to the patient200. In some embodiments, the mobile shuttle330may include hinged or telescoping features that may enable a user to adjust the size of the shuttle330or the position of the support member350which may allow the shuttle330to fit over different imaging devices, or to reduce the size of the shuttle330for transport. In some embodiments, the mobile shuttle330may include a cable management system for routing cables to and from the at least one robotic arm301and/or the optical sensor device311. In embodiments, one or more electrical connections for power and/or data for the at least one robotic arm301and/or the optical sensor device311may be located on or within the mobile shuttle330and may be routed to a single external connector or set of connectors on the shuttle330. FIG.2is a rear perspective view of an alternative embodiment of a mobile shuttle330positioned adjacent to an imaging device303. The patient support60in this embodiment is rotated in-line with the base20and extends partially into the bore307of the gantry40. The mobile shuttle330in this embodiment is shown without a robotic arm mounted to the shuttle330. 
In this embodiment, the support member350, lateral connector portion419and arms413,415are shown as a unitary structure. The wheels403in this embodiment include a set of casters403aat a forward position on the base401, and fixed wheels403bat the rear of the base401. FIGS.4A-4Dillustrate an alternative embodiment of a mobile shuttle330positioned adjacent to an imaging device303.FIG.4Ais a top perspective view of the mobile shuttle330and the imaging device303with the patient support60removed from the column50.FIGS.4B-4Care perspective views showing the patient support60attached to the column50and rotated in-line with the bore307of the gantry40. In this embodiment, a robotic arm301is mounted to a first moveable carriage360on a support member350(e.g., a curved rail) of the mobile shuttle330, and an optical sensor device311for a motion tracking system is mounted to a second moveable carriage430that is also located on the support member350(e.g., curved rail). In this embodiment, the first and second carriages360,430may slide independently over the same support member350to adjust their positions relative to the patient and to one another. In addition, as is most clearly visible inFIGS.4A,4B and4C, the mounting surface361for the base end304of the robotic arm301is angled upwards from the top of the first carriage360. The robotic arm301may thus extend in a lateral direction from the mounting surface361over the top surfaces of the carriage360, support member350and gantry40and may then extend down over the front face of the gantry40as shown inFIGS.4A-4D. The mounting surface361may be at any angle with respect to the top surface of the carriage360, such as about 90° as shown inFIGS.4A-4D. As also shown inFIGS.4A-4D, a support arm421for the optical sensor device311is attached to the second carriage430. The support arm421in this embodiment has a fixed length. A first joint423, which may be a rotating ball joint, enables the support arm421to be pivoted on the second carriage430. The first joint423may have a locking mechanism to lock the joint423in place. A second joint425at the distal end of the arm421, which may also be a rotating ball joint, may enable adjustments to the orientation of the optical sensor device311. The second joint425may also have a locking mechanism to lock the joint425in place. In this embodiment, the second carriage430may be manually moved to a desired position on the support member350and a clamping mechanism may enable the second carriage430to be fixed in place. Alternately, the second carriage430may be driven on the support member350by an active drive system. Various embodiments of a mobile shuttle330may enable one or more robotic arms301to be moved to any position along a support member350, such as a curved rail. Since the base end304of the robotic arm301may be mounted above the gantry40, the robotic arm301can be easily moved out of the way of the surgical area, such as by raising the entire arm301above the patient200. When the patient table60is in a position as shown inFIGS.1A-1D, the base end304of the robotic arm301can be moved on the support member350to any position along the length of the patient200so that the robotic arm301may approach the patient from the side of the patient200or at an oblique angle relative to the patient axis. When the table60is rotated in-line with the gantry40as shown inFIG.2, the robotic arm301can approach the patient along the patient axis or in a direction that is generally parallel to the patient axis. 
The robotic arm301may also be moved down towards the ends351,353of the support member350, which may enable the robotic arm301to approach the patient200in a lateral direction. Various embodiments may enable a robotic arm301to easily access a patient200that is located within or extends through the bore307of the gantry40of the imaging system303. In various configurations, the robotic arm301may extend down from above the patient200, which may conserve valuable space in the surgical theater. Various embodiments of a mobile shuttle330have been described for mounting at least one robotic arm301in close proximity to an imaging device303having a generally O-shaped gantry40, where the gantry40is supported above a base20by a generally U-shaped gimbal30. However, it will be understood that a mobile shuttle330may be used for mounting one or more robotic arms301proximate to other types of imaging systems, such as an x-ray imaging system having an O-shaped imaging gantry mounted to a mobile support structure in a cantilevered fashion, as well as other x-ray imaging systems having imaging gantries with different geometries. In some embodiments, a mobile shuttle330may be used for mounting one or more robotic arms301proximate to an x-ray imaging system having a C-arm type gantry, or to imaging devices utilizing different imaging modalities (e.g., MRI, PET, SPECT, ultrasound, etc.). In general, a mobile shuttle330according to various embodiments may include a mobile base that may be moved adjacent to an imaging device such that a support element supported by the mobile base extends at least partially over a gantry of the imaging system, and a base end of at least one robotic arm is mounted to the support element. Further, in addition to imaging systems used for diagnostic imaging of a human patient, a mobile shuttle330in various embodiments may also be configured for mounting at least one robotic arm301proximate to an imaging system used for veterinary imaging or for industrial/commercial applications, such as part inspection and assembly. FIG.5illustrates an embodiment of a mounting apparatus501for a robotic arm301located on a portion of an imaging system303. The robotic arm301may be similar or identical to the robotic arm301described above. The imaging system303in this embodiment includes an elongated base520that may be fixed to a weight-bearing surface (e.g., the floor), a support post522that extends vertically from the base520and an imaging gantry524that is attached to the support post522on one side such that the gantry524is supported in a cantilevered manner. A patient table560is located adjacent to the imaging system303, and includes a bed portion561for supporting a patient during an imaging scan. In some embodiments, the bed portion561may pivot with respect to a linkage member562, and the linkage member562may pivot with respect to a base portion563that is fixed to the floor to enable the bed portion561to be raised and lowered with respect to the floor and/or to change the tilt angle of the bed portion561with respect to the floor. The gantry524may be translatable in a vertical direction along the length of the support post522to raise and lower the gantry524relative to the floor, and the gantry524may also be rotatable with respect to the support post522to modify the tilt axis of the gantry524. 
In embodiments, the support post522and gantry524may be translatable in a horizontal direction along the length of the base520to perform an imaging scan (e.g., a helical x-ray CT scan) of a patient lying on the patient table560. The mounting apparatus501for the robotic arm301may include a base portion540that is located on the base520of the imaging system303. A support member550may extend from the base portion540over the top surface of the patient table560and at least partially above a patient supported thereon. The robotic arm301may be mounted to the support member550. As shown inFIG.5, a bracket member551may be located on the support member550, and the robotic arm301may be mounted to the bracket member551. In some embodiments, the bracket member551may be slideable along the length of the support member550to adjust the position of the robotic arm301. The support member550may include a curved portion550a(e.g., curved rail) that extends over the patient table560and a straight portion550bproximate to the base portion540. In embodiments, the straight portion550bmay extend from and retract into a housing in the base portion540so that the support member550may be raised and lowered in the direction of arrow504. The support member550may be raised and lowered in conjunction with the raising and lowering of the patient table560and/or the gantry524of the imaging system303. The support member550may be raised and lowered manually and/or using a motorized system that may be located within the base portion540. The support member550may be fixed in place when it is raised or lowered to a desired height. In addition to a curved support member550, in some embodiments the support member550may comprise one or more straight segments (e.g., rail segments), where at least a portion of the support member550may extend over the top surface of the patient table560and at least partially above a patient supported thereon. In embodiments, the support member550may also be rotatable with respect to the base portion540in the direction of arrow506, as shown inFIG.5. For example, the straight portion550bof the support member550may extend through a cover552that may rotate with respect to the base portion540on a rotary bearing553. This may enable the support member550to be rotated out of the way of the patient table560and patient when the robotic arm301is not needed. In embodiments, the base portion540may be weighted to provide stability to the robotic arm301attached to the support member550. The base portion540may enclose electronic circuitry and/or processor(s) used to control the operation of the robotic arm301. One or more connections for power and/or data may extend over or through the support member550and may connect the robotic arm301to a control system (e.g., computing device) and/or a power supply that may be located in the base portion540. The base portion540may be permanently fixed to the base520of the imaging system303or may be removably mounted to the base520. For example, the mounting apparatus501may be moved using a mobile cart or shuttle (not illustrated) and may be lifted from the mobile cart/shuttle and placed onto the base520of the imaging system303. In embodiments, the base portion540may be clamped or otherwise fixed in place on the base520. In some embodiments, the mounting apparatus501may be moveable along the length of the base520. 
For example, the base portion540of the mounting apparatus501may include one or more bearing elements (e.g., rollers or sliders) that engage with a bearing surface on the base520of the imaging system303and may enable the mounting apparatus to translate along the length of the base520in the direction of arrow508. A drive mechanism may be mounted inside or beneath the base portion540to drive the translation of the mounting apparatus501along the base520. In some embodiments, the mounting apparatus501may not include a drive system for translating the mounting apparatus501. The base portion540of the mounting apparatus501may be mechanically coupled to the support post522of the imaging system303, such as via one or more rigid spacers (not illustrated) that may extend along the length of the base520. The spacer(s) may enable the separation distance between the mounting apparatus501and the support post522to be adjusted. The translation of the support post522along the base520may drive the translation of the mounting apparatus501to which it is attached. A system such as shown inFIG.5may be utilized for performing a variety of different diagnostic and treatment methods. In some embodiments, the system may be used for robot-assisted interventional radiology procedures. For example, the end effector of the robotic arm301may include or may hold an invasive surgical tool, such as a biopsy needle, that may be inserted into the body of a patient on the patient table560. The imaging system303may obtain images of the patient (e.g., CT scans, such as CT fluoroscopic scans) that may be used to guide the insertion of and confirm the position of an invasive tool or instrument. As shown inFIG.5, in some embodiments the robotic arm301may be extended to a position that is at least partially within the bore307of the imaging gantry524. In embodiments, a system as shown inFIG.5may be used for an image-guided surgical procedure, and may include a sensing device (e.g., a camera array) for tracking the relative positions and orientations of various objects within the surgical space. A motion tracking device (e.g., camera array) may be mounted to the mounting apparatus501, such as on the support member550(e.g., curved rail), similar to the embodiments ofFIGS.1A-1C,3and4A-4Ddescribed above. Alternately, the motion tracking device may be mounted to the imaging system303, the patient table560or to a separate cart. FIGS.6A-6Billustrate a further embodiment of a mounting apparatus601for a robotic arm301used for robotically assisted surgery. The mounting apparatus601may be a mobile apparatus (i.e., a cart or shuttle) that may be used to position the arm301for performing a surgical procedure as well as for transport and/or storage of the robotic arm301. The mounting apparatus601in this embodiment includes a base portion602having a plurality of wheels603, a pair of support arms605a,605bextending upwards from the base portion602, and a support member607extending between the support arms605a,605b. At least one robotic arm301may be attached to the support member607. The support member607may be a curved rail to which the robotic arm301is attached. The mounting apparatus601may be similar to a mobile shuttle330such as described above with reference toFIGS.1A-4D, although the support member607in this embodiment does not extend over an outer surface of a gantry of an imaging system. The mounting apparatus601may have a smaller profile (e.g., height and/or width dimension) than the mobile shuttle330as shown inFIGS.1A-4D. 
The mounting apparatus601may be positioned adjacent to a patient table660. The patient table660may be an operating table, such as a Jackson table as shown inFIGS.6A and6B. The mounting apparatus601and the robotic arm301may be utilized with or without an imaging device located in the operating theater.FIG.6Aillustrates the mounting apparatus601and robotic arm301used to perform robotically-assisted image guided surgery without an intra-operative imaging system.FIG.6Billustrates the mounting apparatus601and robotic arm301used to perform robotically-assisted image guided surgery with an imaging system303(e.g., an O-arm® system, a C-arm system, etc.) located in the operating theater. The imaging system303may approach the patient from the side of the patient table660to obtain images of the patient. The mounting apparatus601may be positioned adjacent to an end of the patient table660such that the robotic arm301may extend from the mounting apparatus601along the length of the patient table660to the surgical area. The base portion602may include a pair of spaced-apart foot sections609,610extending parallel to one another. Wheels603(e.g., casters) may be located at the front and rear of each foot section609,610to enable transport of the mounting apparatus601. One or more stabilizers611may be extended from the bottom of each foot section609,610to contact the floor and maintain the mounting apparatus601in a fixed location. The stabilizers611may be extended from and retracted into the respective foot sections609,610manually (e.g., via a lever or foot pedal). In some embodiments, a motorized system located in the foot sections609,610may drive the extension and retraction of the stabilizers611. Alternately or in addition, the wheels603may be retracted into the foot sections609,610to lower the mounting apparatus601to the floor at a desired location. The support arms605a,605bmay extend from the rear of the base portion602and may extend upwards at an angle towards the front of the mounting apparatus601. An open region613may be defined between the foot sections609,610, the support arms605a,605band the support member607. In embodiments, the foot sections609,610and support arms605a,605bon either side of the mounting apparatus601may not be connected to one another except at the top of the mounting apparatus601(e.g., via the support member607). This may enable the mounting apparatus601to be positioned over a patient table660such that the mounting apparatus601may at least partially straddle the patient table660, such as shown inFIGS.6A and6B. The open region613may be designed to accommodate a wide variety of different types of patient tables in a "straddle" configuration. The open region613may also accommodate other devices within the operating theater, such as an anesthesia machine and/or a Mayo stand. In embodiments, the open region613may have a width of at least about 32 inches and a height of at least about 50 inches. The foot sections609,610and/or the support arms605a,605bmay be weighted to provide stability to the robotic arm301attached to the support member607. One or more housings may be formed in foot section(s)609,610and/or support arm(s)605a,605bfor enclosing electronic circuitry and/or processor(s) used to control the operation of the robotic arm301and/or for performing image guided surgery/surgical navigation. 
One or more connections for power and/or data may extend over or through the support member607and along one or both support arms605a,605band may connect the robotic arm301to a control system614(e.g., computing device) and/or a power supply located in the mounting apparatus601. As shown inFIGS.6A-6B, a bracket member651may be located on the support member607, and the robotic arm301may be mounted to the bracket member651. In some embodiments, the bracket member651may be slideable along the length of the support member607to adjust the position of the robotic arm301. A support arm621for an optical sensor device311(e.g., multi-camera array) of a motion tracking system305may be located on the mounting apparatus601. The support arm621may be mounted to the support member607, and may be attached to the bracket member651as shown inFIGS.6A-6B. In this embodiment, the support arm621includes a plurality of rigid segments622(e.g., two) connected by joint(s)623(e.g., ball joints). The user may adjust the position and orientation of the optical sensor device311by articulating the rigid segments622on the joints623. The support arm621may include features that hold the optical sensor device311in a desired pose during a surgical procedure. The support arm621may also be folded into a compact configuration for ease of transport of the mounting apparatus601. FIG.7illustrates a further embodiment of a mounting apparatus701for a robotic arm301used for robotically assisted surgery. The mounting apparatus701may be a mobile apparatus that may be used to position the arm301for performing a surgical procedure as well as for transport and/or storage of the robotic arm301. The mounting apparatus701may include a base703and a support arm705extending from the base703, where the robotic arm301may be secured to the support arm705.FIG.7illustrates the mounting apparatus701located at an end of a patient table707such that the robotic arm301may extend from the mounting apparatus701to a patient located on the patient table707. An optical sensor device (e.g., multi-camera array) of a motion tracking system may also be attached to the mounting apparatus701, as described above. A power supply and other electrical components (e.g., computer(s)) may be housed within the base703of the mounting apparatus701. The mounting apparatus701may be a wheeled cart that includes wheels on the base703to enable the mounting apparatus701to be moved across the floor. Alternately or in addition, a separate shuttle device (not illustrated) may be utilized to transport the mounting apparatus701to a desired location and leave it in a fixed position (e.g., by lowering it to the floor). The shuttle device may then be moved away from the mounting apparatus701. After use, the shuttle device may be used to lift the mounting apparatus701from the floor for transport to another location. The mounting apparatus701in the embodiment ofFIG.7may be relatively small and lightweight in comparison to conventional carts for surgical robotic arms. This may enable easier transportation of the mounting apparatus701and robotic arm301and may reduce the space in the operating room occupied by the surgical robot and its support structure. In embodiments, the mounting apparatus701may also include an anchoring apparatus709that may be deployed for the purposes of providing greater stability to the mounting apparatus701and robotic arm301. The anchoring apparatus709may comprise one or more plate-shaped elements that may be pivoted downward from the base703to lie flat against the floor. 
Weight may be provided on the top surface of the anchoring apparatus709to provide additional ballast and improve the stability of the mounting apparatus701and robotic arm301. In one embodiment, the weight may be provided by moving a mobile imaging device303or another heavy item of equipment in the operating theater over the anchoring apparatus709. Further embodiments include a mobile mounting apparatus for a surgical robotic arm that includes a docking system for mating with pre-installed feature(s) in the floor of the operating room.FIG.8illustrates a first embodiment of a mounting apparatus801that is a mobile cart having a base803with wheels804and a support arm805extending from the base803to which a robotic arm301is attached. The support arm805may be hinged so that the robotic arm301may be pivoted upwards to a raised position above a patient table860, as shown inFIG.8. The support arm805and robotic arm301may be pivoted downwards towards the base803to improve stability of the cart during transport. An optical sensor device (e.g., multi-camera array) of a motion tracking system may also be attached to the mounting apparatus801, as described above. A power supply and other electrical components (e.g., computer(s)) may be housed within the base803of the mounting apparatus801. The docking system807in this embodiment includes a first docking element809that is extended from the bottom surface of the mounting apparatus801and a second docking element811that is located on and/or within the floor. The second docking element811may be a socket that is pre-installed in the floor of the operating theater. The second docking element811may be pre-installed in a select location of an operating room, such as adjacent to a fixed surgical table860or beneath overhead surgical lighting or ventilation system(s). A plurality of second docking elements811may be pre-installed in selected locations around the operating room. The first docking element809may be a threaded connector that may be extended from the bottom of the base803(e.g., via a motor or a foot pedal or other mechanical means) and into the second docking element811. The second docking element811may have corresponding threads which engage with the threads of the first docking element809to mechanically couple the first and second docking elements809,811. In embodiments, once the docking system807is engaged, the first docking element809may be retracted back towards the base803of the mounting apparatus801to take up any play between the first and second docking elements809,811and provide increased stability to the mounting apparatus801. In some embodiments, the wheels804of the mounting apparatus801may retract into the base803in coordination with the extension of the first docking element809so that the mounting apparatus801may be lowered to the floor as the docking system807is engaged. In embodiments, the second docking element811may be countersunk to facilitate engagement with the first docking element809. The docking system807may also include additional features, such as mechanical, optical and/or electromagnetic features to ensure that the base803of the mounting apparatus801is properly aligned over the second docking element811before the first docking element809is extended. In some embodiments, the docking system807may include connections for power and/or data to and/or from the mounting apparatus801. The docking system807may be disengaged by actuating a release mechanism (e.g., a button, foot pedal, etc.) 
that causes the first docking element809and the second docking element811to disconnect from one another so as to enable the mounting apparatus801to be transported and/or re-positioned. In preferred embodiments, when the docking system807is disengaged, the second docking element811may be substantially flush with the floor surface and does not interfere with medical personnel or other equipment within the operating room. Although the embodiment ofFIG.8illustrates a docking mechanism807having a connector that extends from the mounting apparatus801to engage with a socket in the floor, it will be understood that the docking mechanism807may include a connector that extends from the floor to engage with the mounting apparatus801. FIG.9illustrates an alternative embodiment of a mobile mounting apparatus for a surgical robotic arm that docks with pre-installed feature(s) in the operating room floor. In this embodiment, the mounting apparatus901may be moved using a separate shuttle device (not illustrated). The mounting apparatus901may be lowered onto or slid into a floor mount902that may be pre-installed in the floor of the operating room. The mounting apparatus901and/or floor mount902may have mating features to facilitate alignment and a locking mechanism that engages to lock the mounting apparatus901into position on the floor. The mounting apparatus901in this embodiment includes a base903and a boom arm905that is able to swivel with respect to the base903, as shown inFIG.9. A robotic arm301and an optical sensor device311of a motion tracking system may be attached to the boom arm905. In some embodiments, the height of the boom arm905with respect to the base903may be adjustable. FIG.10illustrates a further embodiment of a mounting apparatus1001for a robotic arm that includes a base1003and a support arm1005extending from the base1003to which a robotic arm301is attached. An optical sensor device (e.g., multi-camera array) of a motion tracking system may also be attached to the mounting apparatus1001, as described above. A power supply and other electrical components (e.g., computer(s)) may be housed within the base1003of the mounting apparatus1001. The mounting apparatus1001ofFIG.10may be similar to the mounting apparatus801described above with reference toFIG.8. However, the mounting apparatus1001ofFIG.10is not a wheeled cart and may be moved using a separate shuttle device (not illustrated). In addition, the mounting apparatus1001may be positioned adjacent to an end of a patient table. The mounting apparatus1001ofFIG.10may include a docking mechanism1007that includes a first docking element1009(e.g., a threaded connector) that engages with a second docking element1011(e.g., socket) that is located on and/or within the floor to secure the mounting apparatus1001to the floor. In some embodiments, multiple mounting apparatuses801,901,1001as described above may be docked with pre-installed docking features located at various locations in the operating room floor. Various items used during surgery, such as robotic arm(s), surgical instrument(s), instrument tray(s), camera(s), light source(s), monitor screen(s), etc., may be mounted to the mounting apparatuses801,901,1001. In embodiments, multiple mounting apparatuses801,901,1001may be bridged by one or more spanning members (e.g., cross-bar(s), truss(es), etc.) 
that may extend over or adjacent to the surgical area, and one or more items, such as robotic arm(s), surgical instrument(s), instrument tray(s), camera(s), lighting, monitor screen(s), etc., may be suspended from a spanning member. Further embodiments include a table mount for a surgical robotic arm. A table mount approach may minimize the size and footprint of the mounting apparatus used to mount a surgical robotic arm while enabling the robotic arm to be located in an advantageous position for performing robotically-assisted surgery. For example, a robotic arm mounted to the surgical table may have a closer physical connection and relationship to the patient, so that the robotic arm may better follow or accommodate motion of the patient. A table mount according to various embodiments may enable the robotic arm to be mounted along the edge of the patient table (i.e., along the side of the patient), at an end of the table (i.e., at the head or foot of the patient), and/or above the patient, as described in further detail below. In some embodiments, a table mount may be movable with respect to the patient table (e.g., slidable along the length of the patient table) to adjust the position of the robotic arm on the table. FIGS.11A-11Eillustrate a first embodiment of a table mount1101for mounting at least one robotic arm301to a surgical table1160. In this embodiment, the table mount1101may be used to mount a robotic arm301to a side1102of the table1160and/or to an end1104of the table1160. It will be understood that in some embodiments, a table mount1101may be configured to mount a robotic arm301only to the side1102or to the end1104of the table1160. As shown inFIG.11A, the table mount1101may include a generally flat plate member1103that may be placed on a surgical tabletop. One or more raised platforms1105a,1105bmay extend from a periphery of the plate member1103. The raised platforms1105a,1105bmay be cantilevered from the side1102or end1104of the surgical table1160. The raised platforms1105a,1105bmay include a mounting surface1107for attaching a robotic arm301, as shown inFIGS.11B-11E. The mounting surface1107may optionally be angled toward the surface of the patient table1160, as shown inFIGS.11B-11E. In embodiments, the plate member1103may be placed across the width of the surgical tabletop1108, and one or more tabletop pads1110may be placed over the top surface of the plate member1103. The weight of the patient on the tabletop pad1110and plate member1103may provide additional ballast to improve the stability of the table mount1101and robotic arm301. The raised platforms1105a,1105bmay be integrally formed with or permanently mounted to the plate member1103, or alternately, the raised platforms1105a,1105bmay be removable from the plate member1103. For example, as shown inFIGS.11B and11C, raised platform1105bmay be removed from the table mount1101and a robotic arm301may be mounted to the side1102of the table1160on raised platform1105a. InFIGS.11D and11E, raised platform1105amay be removed from the table mount1101and a robotic arm301may be mounted to the end1104of the table1160on raised platform1105b. The table mount1101may be attached to a surgical table1160using a clamping mechanism that may clamp the table mount1101across the width of the surgical table1160. In various embodiments, the table mount1101may be designed for use with different types of surgical tables that may vary in terms of structural features and/or dimensions of the surgical table. 
Thus, a universal or semi-universal design for a table mount1101may be utilized. As shown inFIG.11C, the plate member1103may have a first projection1111that extends from the bottom surface of the plate member1103and abuts against a structural element of the surgical table1160(e.g., a side surface of the table1160, a side rail, etc.). Opposite the first projection1111, the plate member1103may include a reciprocating portion1113that may include alignment features (e.g., rods1115) that slide within openings1117in the plate member1103. The reciprocating portion1113may be moved towards or away from the rest of the plate member1103to adjust the width of the plate member1103. The reciprocating portion1113may have a second projection1119(seeFIG.11C) that extends from the bottom surface of the reciprocating portion1113and may abut against a structural element (e.g., the side surface of the table, a side rail, etc.) on the opposite side of the table1160. The plate member1103may be fastened against opposite sides of the table1160by turning a knob1121(seeFIG.11A) so that a threaded connector extending through the reciprocating portion1113becomes engaged within a threaded opening1119in the plate member1103. In addition to mounting a robotic arm301as shown inFIGS.11B-11E, a table mount1101as described above may be used to mount other items to a surgical table, such as surgical tools, instrument trays, cameras, monitors/displays, light sources, etc. FIGS.12A-12Eillustrate a second embodiment of a table mount1201for mounting at least one robotic arm301to a surgical table1160. In this embodiment, the table mount1201may include a bridge section1202that extends over the top surface of the surgical table1160. At least one robotic arm301may be mounted to the bridge section1202. As shown inFIG.12A, the table mount1201may include a generally flat plate member1203that may be placed on a surgical tabletop1108. As in the embodiment ofFIGS.11A-11E, the plate member1203may be placed across the width of the surgical tabletop1108, and one or more tabletop pads1110may be placed over the top surface of the plate member1203. The weight of the patient on the tabletop pad1110and plate member1203may provide additional ballast to improve the stability of the table mount1201and robotic arm301. The table mount1201may include a clamping mechanism that fastens the table mount1201across the width of the surgical table1160. A reciprocating portion1213of the plate member1203may enable the table mount to be adjusted to accommodate different table widths. As shown inFIG.12E, a first projection1211may extend from the bottom surface of the plate member1203and abut against a structural element of the surgical table1160. A second projection1219may extend from the bottom surface of the reciprocating portion1213of the plate member1203and abut against a structural element on the opposite side of the table1160. The table mount1201may be clamped to the table1160by turning a knob1221(seeFIG.12A) or similar mechanism that tightens the first and second projections1211,1219against opposite sides of the table1160. The bridge section1202in this embodiment includes a set of four vertical support members1204extending from the plate member1203and a mounting surface1206supported above the table1160by the support members1204. In this embodiment, the mounting surface1206has an arch shape, although it will be understood that the mounting surface may be flat. 
Two of the support members1204may extend through openings1208in the periphery of the plate member1203. A plurality of fasteners1210(e.g., nuts) may be used to secure the support members1204within the openings1208. The other two support members1204may extend through slots1212in the reciprocating portion1213of the plate member1203. A plurality of fasteners1210(e.g., nuts) may be used to secure the support members1204within the slots1212. The height of the mounting surface1206above the top surface of the table1160may be adjusted by varying the length of the support members1204extending above the plate member1203. FIGS.12B-12Eillustrate the table mount1201supporting a robotic arm301above the surgical table1160. A robotic arm301may be mounted at various positions on the mounting surface1206. In embodiments, the base end of the robotic arm301may be mounted to a moveable carriage (e.g., similar to carriage360shown inFIGS.1A-1D), and the carriage with the robotic arm301may be slideable over the bridge section1202to reposition the robotic arm301. A table mount1201as illustrated and described above may be used to mount other items to a surgical table, such as surgical tools, instrument trays, cameras, monitors/displays, light sources, etc. In embodiments, multiple table mounts1101,1201as shown and described above may be attached to a surgical table. A plurality of table mounts1101,1201may be bridged by one or more connecting members (e.g., cross-bar(s), truss(es), etc.) that may extend over or adjacent to the surgical table, and one or more items, such as robotic arm(s), surgical instrument(s), instrument tray(s), camera(s), lighting, monitor screen(s), etc., may be mounted to a connecting member. FIG.13illustrates another embodiment of a table mount for a surgical robotic arm. In this embodiment, the robotic arm301is mounted to a column50that supports the surgical table60above the floor. The mounting apparatus1301in this embodiment includes a support element1303that is fixed to the column50beneath the table60and a support arm1305that extends up from the support element1303above the surface of the table60. The robotic arm301may be mounted to the support arm1305. In embodiments, the mounting apparatus1301may be adjustable such that the robotic arm301may be mounted adjacent to either side or optionally to at least one end of the table60. The mounting apparatus1301may also be used to mount an optical sensor device for a motion tracking system. FIG.14illustrates an embodiment of a ceiling mount1400for a surgical robotic arm301. The ceiling mount1400includes a support member1401that extends vertically downwards from the ceiling. The base end304of the robotic arm301may be mounted to an attachment point1402on the support member1401such that the robotic arm301may extend to reach a patient on a surgical table60. In some embodiments, the height of the attachment point1402for the robotic arm301may be adjustable, such as by telescoping the support member1401towards or away from the ceiling. The attachment point1402may also be rotatable with respect to the support member1401. In some embodiments, the entire support member1401may be moveable along tracks on or within the ceiling. A support arm1407for an optical sensor device311of a motion tracking system may be also attached to the support member1401as shown inFIG.14. 
The foregoing method descriptions are provided merely as illustrative examples and are not intended to require or imply that the steps of the various embodiments must be performed in the order presented. As will be appreciated by one of skill in the art, the steps in the foregoing embodiments may be performed in any order. Words such as "thereafter," "then," "next," etc. are not necessarily intended to limit the order of the steps; these words may be used to guide the reader through the description of the methods. Further, any reference to claim elements in the singular, for example, using the articles "a," "an" or "the" is not to be construed as limiting the element to the singular. The preceding description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the invention. Thus, the present invention is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
11858128

While implementations are described herein by way of example, those skilled in the art will recognize that the implementations are not limited to the examples or figures described. It should be understood that the figures and detailed description thereto are not intended to limit implementations to the particular form disclosed but, on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope as defined by the appended claims. The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words "include", "including", and "includes" mean "including, but not limited to". DETAILED DESCRIPTION During operation, an autonomous mobile device such as a robotic assistant (robot) may perform various tasks. These tasks may include one or more of executing a particular application or moving the robot. Applications may include particular computer programs comprising instructions that, when executed, perform various functions. For example, a first task may involve the user requesting the robot to play a particular song using a music application. In another example, a second task may involve the user requesting the robot to move to a different room. In another example, a third task may involve the user requesting the robot to initiate a video call and follow the user as they move throughout the home. The robot is capable of autonomous movement, allowing it to move from one location in the home to another without being "driven" or remotely controlled by the user or other human. The robot operates on electrical power provided by a power source, such as one or more rechargeable batteries. The electrical power may then be used to operate one or more motors that are used to move the robot from one location to another. Other motors may be used to move other parts, such as extending a mast, operating a manipulator arm, and so forth. In one implementation, a motor may comprise a three-phase brushless direct current (BLDC) motor that is coupled to a drive wheel. The robot may include two motors, one driving a left drive wheel and one driving a right drive wheel. By controlling the power provided to the respective wheels, the robot may be able to rotate, move forward, move backward, and so forth. During normal operation the robot may operate the motors to move from one location to another. This operation may include slowing and stopping in the normal course of movement. For example, pulse width modulation (PWM) techniques may be used to control the power to the motors during normal operation. During normal operation, when the robot slows down, a PWM controller may decrease the power delivered to the motors and allow the robot to come to rest. However, it may be necessary to perform a rapid stop of the robot in some situations. The rapid stop should decelerate the robot quickly, but not so quickly that the robot could topple or skid. The rapid stop should also prevent or reduce the potential for damage to internal components of the robot. A rapid stop may be initiated responsive to various stop conditions. 
For example, the stop conditions may include expected collision of the robot with an object, actual collision of the robot with an object, receipt of a command to rapidly stop movement of the robot, failure of one or more components of the robot, and so forth. In other implementations, other stop conditions may be determined, such as if rotation of a wheel is less than a threshold value. Described in this disclosure is a braking system that allows a device, such as a robot, to perform a rapid stop that is safe. The system “fails safe” in that it will immediately perform the rapid stop in the event a stop condition is signaled, and is able to operate even in the event of partial or complete loss of power. While the following descriptions may describe a single motor, it is understood that the system may operate with any number of motors. These motors may be single phase or multiple-phase. During normal operation of the robot, various subsystems or portions of subsystems provide an input to a multiple-input AND gate. So long as these inputs are in a “high” (above a threshold voltage) state the AND gate produces a high output. In the event of one or more stop conditions, at least one of these inputs transitions to a “low” (below the threshold voltage) state, resulting in the output from the multiple-input AND gate transitioning to a “low” state. The “low” state acts as a signal that operates various circuits in the braking system. Responsive to the signal, a motor cutoff circuit disconnects the motor from the power source, preventing current from flowing between the motor and the power source. This disconnect prevents further power from being provided to the motor. Even with the power source disconnected from the motor, the robot may continue moving due to inertia. Also responsive to the signal, a braking circuit begins to safely and in a controlled fashion dissipate the power produced by the motor. For example, with external power removed, the motor may act as a generator, converting motion into electrical power. The braking circuit may take the power produced by the motor and deliver it, at a predetermined rate, to one or more resistors. For example, the braking circuit may utilize a current regulator with the input being connected to the motor, the output being connected to a resistor, and the current regulator controlled by the signal. In some implementations, the power may be used to charge a battery, producing regenerative braking. By maintaining the dissipation of power at the predetermined rate the deceleration of the robot is controlled and predictable, preventing skidding or toppling. The controlled dissipation of power also prevents potential damage to components in the robot due to excessive current, voltage, and so forth. In some implementations, the braking circuit may utilize a braking clamp circuit. The braking clamp circuit may be configured to determine if a voltage produced by the motor exceeds a threshold value, and if so, directly connects the motor to a resistive load. The braking clamp circuit may operate in conjunction with the current regulators described above. When the voltage produced by the motor drops below the threshold value, the braking clamp circuit may disconnect the resistive load, with the remaining energy dissipated through the current regulators as described above. As the motor slows, and the resulting back electromotive force (EMF) produced by the motor drops below a threshold value, a stop circuit operates. 
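As a rough, non-limiting illustration of the controlled dissipation described above, the following sketch simulates one drive wheel being braked through a current regulator into a resistive load. All of the numeric values (motor constant, wheel radius, mass, current limit, load resistance) are assumptions chosen only for this example and are not taken from this disclosure; the point is that because the regulator bounds the braking current, the braking torque, and therefore the deceleration, remain bounded and predictable.

# Illustrative simulation of regulated dynamic braking for one drive wheel.
# All constants are assumed example values, not values from this disclosure.
K_MOTOR = 0.3         # back-EMF constant in V per rad/s (numerically equal to N*m/A)
WHEEL_RADIUS = 0.08   # meters
MASS_PER_WHEEL = 6.0  # kilograms of robot mass carried by this wheel
I_LIMIT = 8.0         # amperes; the "predetermined rate" set by the current regulator
R_LOAD = 0.2          # ohms; assumed resistance of the dissipation path
DT = 0.005            # seconds per simulation step

def simulate_regulated_braking(speed_m_s=0.8):
    """Return the approximate time for the wheel to come to rest under regulated braking."""
    t = 0.0
    while speed_m_s > 0.01:
        omega = speed_m_s / WHEEL_RADIUS          # motor speed in rad/s
        back_emf = K_MOTOR * omega                # the motor now acts as a generator
        # The regulator sinks at most I_LIMIT into the resistor, so the braking
        # torque (K_MOTOR * current) and the resulting deceleration stay bounded.
        current = min(I_LIMIT, back_emf / R_LOAD)
        force = (K_MOTOR * current) / WHEEL_RADIUS  # braking force at the wheel rim
        speed_m_s = max(0.0, speed_m_s - (force / MASS_PER_WHEEL) * DT)
        t += DT
    return t

print(round(simulate_regulated_braking(), 2))  # roughly 0.4 seconds with these assumed values

With these assumed numbers the deceleration is capped near 5 m/s² while the current limit is active and then tapers off as the back EMF collapses, which is the controlled, skid-free behavior described above.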
In one implementation, the stop circuit may incorporate a relay that, during normal operation, energizes a coil that in turn prevents a connection between the terminals (or windings) of the motor. As the voltage drops below a threshold value, the coil is deenergized and the relay establishes a connection between the terminals of the motor, producing a short between these terminals. This short may retard further rotation of the shaft of the motor. For example, with a BLDC motor, a short between the terminals results in the motor's shaft being resistant to rotation. The connection between the terminals of the motor may include a fuse. In the event the motor is rotated by an external force, such as if the robot is pushed, the power produced by the rotation of the motor will open the fuse to prevent damage. When the stop condition is removed and a start or normal condition is obtained, the inputs to the multiple-input AND gate are again all “high”. As a result, the relay in the stop circuit is again energized, disconnecting the short between the motor's terminals. The braking circuit discontinues dissipation of power, and the motor cutoff circuit reconnects the motor to the power supply. The robot may now resume normal operation. By using the system described herein, the robot or other autonomous mobile device is able to more safely perform a rapid stop. Once a stop condition is determined, the latency of the system is extremely low due to the design of the circuitry. The ability of the system to operate even if there is a partial or complete failure of the power source further improves safety. Illustrative System FIG.1illustrates a system100in which a user102uses a robot104with a rapid braking system, according to some implementations. The robot104may include a battery(s)106to provide electrical power for operation of the robot104. The battery106may be rechargeable, allowing it to store electrical energy obtained from an external source. The techniques described in this disclosure may be applied to other types of power sources, including but not limited to: fuel cells, flywheels, capacitors, superconductors, wireless power receivers, and so forth. For example, instead of a battery106the robot104may use one or more supercapacitors to store electrical power for use. The robot104may include several subsystems, such as an application subsystem108, a mast subsystem110, a mobility subsystem112, a drive subsystem114, and so forth. In other implementations, other arrangements of subsystems may be present. These subsystems may be powered by the battery(s)106. The application subsystem108includes one or more hardware processors (processors)116. The processors116may comprise one or more cores. The processors116may include microcontrollers, systems on a chip, field programmable gate arrays, digital signal processors, graphic processing units, general processing units, and so forth. One or more clocks may provide information indicative of date, time, ticks, and so forth. The robot104may include one or more communication interfaces such as input/output (I/O) interfaces, network interfaces118, and so forth. The communication interfaces enable the robot104, or components thereof, to communicate with other devices or components. The I/O interfaces may comprise Inter-Integrated Circuit (I2C), Serial Peripheral Interface (SPI) bus, Universal Serial Bus (USB) as promulgated by the USB Implementers Forum, RS-212, and so forth. 
The network interfaces118may be configured to provide communications between the robot104and other devices such as other robots104, a docking station, routers, access points, and so forth. The network interfaces118may include devices configured to couple to personal area networks (PANs), local area networks (LANs), wireless local area networks (WLANS), wide area networks (WANs), and so forth. For example, the network interfaces118may include devices compatible with Ethernet, Wi-Fi, Bluetooth, Bluetooth Low Energy, ZigBee, and so forth. The I/O interface(s) may couple to one or more I/O devices120. The I/O devices120may include input devices such as one or more sensors122. The I/O devices120may also include output devices124such as one or more of a motor, light, speaker, display, projector, printer, and so forth. In some embodiments, the I/O devices120may be physically incorporated with the robot104or may be externally placed. Network interfaces118, sensors122, and output devices124are discussed in more detail below with regard toFIG.3. The robot104may also include one or more busses or other internal communications hardware or software that allow for the transfer of data between the various modules and components of the robot104. The application subsystem108of the robot104includes one or more memories126. The memory126may comprise one or more non-transitory computer-readable storage media (CRSM). The CRSM may be any one or more of an electronic storage medium, a magnetic storage medium, an optical storage medium, a quantum storage medium, a mechanical computer storage medium, and so forth. The memory126provides storage of computer-readable instructions, data structures, program modules, and other data for the operation of the robot104. A few modules are shown stored in the memory126, although the same functionality may alternatively be implemented in hardware, firmware, or as a system on a chip (SoC). The memory126may store instructions, such as one or more task modules128, that may be executed at least in part by the one or more processors116. For example, the task modules128may comprise applications that perform various functions such as placing a video call, following the user102as they move, playing audio content, presenting video content, and so forth. Additional modules that may be stored within the memory126are discussed below with regard toFIG.2. For example, the memory126may store, and the processor116may execute, a speech processing module that allows the user102to provide verbal commands to the robot104. In some implementations the robot104may include a mast subsystem110. The mast subsystem110may include an extensible mast that supports one or more I/O devices120. For example, the mast may provide physical support for one or more cameras, microphones, speakers, lights, image projectors, and so forth. In some implementations the movement of the mast or other devices associated with the mast may be included as outputs. For example, extension and retraction of the mast may be used to provide a particular output indicator to the user102. The mobility subsystem112includes one or more processors116. These may be of the same type as the processors116used in other subsystems or different. The processors116may comprise one or more cores. The processors116may include microcontrollers, systems on a chip, field programmable gate arrays, digital signal processors, graphic processing units, general processing units, and so forth.
One or more clocks may provide information indicative of date, time, ticks, and so forth. The mobility subsystem112may also comprise one or more memories126comprising CRSM. In some implementations, the memory126may be the same as or may differ from the memory126of the application subsystem108. The mobility subsystem112may include one or more I/O devices120. For example, the mobility subsystem112may include sensors122used to detect or avoid collision with an object. The mobility subsystem112may include an autonomous navigation module130. The autonomous navigation module130may be implemented as one or more of dedicated hardware, instructions stored in the memory126and executed on one or more processors116, as instructions executed on an external device such as a server that is accessed via the network interfaces118, and so forth. The autonomous navigation module130may be configured to move the robot104. In some situations, the movement may be responsive to instructions directing movement of the robot104that are associated with a particular task. For example, the user102may issue a request to the robot104for the robot104to follow the user102. The request may be processed by the application subsystem108that sends instructions to the mobility subsystem112that directs the robot104to follow the user102. The autonomous navigation module130of the mobility subsystem112may use sensor data from the one or more sensors122to find the user102in the environment, determine a path to move the robot104, determine obstacles to be avoided, and so forth. The mobility subsystem112determines where and how the robot104is to be moved and provides instructions to the drive subsystem114. The drive subsystem114receives the instructions from the mobility subsystem112and proceeds to operate one or more motors. The drive subsystem114may include one or more processors116, I/O devices120, memory126(not shown), and a rapid braking system that includes a motor cutoff circuit132. The motor cutoff circuit132receives status signal(s)134from one or more subsystems such as the application subsystem108, the mast subsystem110, the mobility subsystem112, the drive subsystem114itself, or other subsystems. The status signal134may be indicative of a stop condition. For example, if the status signal134transitions from a “high” value (above a threshold voltage) to a “low” value (below the threshold voltage), the status signal134may be indicative of a stop condition. Continuing the example, the mast subsystem110may provide a status signal134that is a high value during normal operation, but transition to a low value in the event of a fault, such as a failure to retract the mast. The stop condition may result from the autonomous navigation module130determining an expected collision of the robot104with an object, sensors122determining an actual collision of the robot104with an object, the application subsystem108indicating receipt of a command to rapidly stop movement of the robot104, failure of one or more components of the robot104, and so forth. In one implementation, if any of the status signals134are indicative of a stop condition, the rapid braking system operates. Other stop conditions may also be determined, such as rotation of a wheel being less than a threshold value. During normal operation, when no stop condition is present, a motor control circuit136provides power to drive one or more motors138. 
For example, the motor control circuit136may be configured to deliver a particular voltage, provide a particular pulse pattern of power, deliver power to particular windings of the motor138at particular times, and so forth. During normal operation, the robot104may be stopped by commanding the motor control circuit136to cease providing power to the motor138. Responsive to a signal indicative of a stop condition, the motor cutoff circuit132disconnects the motors138from the battery106. For example, the motor cutoff circuit132may interrupt a path of current flow between the battery106, the motor control circuit136, and the motor138. As a result, no further power is delivered to the motor138. In some implementations the power to the motor control circuit136may be interrupted as well. The motor cutoff circuit132is discussed in more detail with regard toFIG.5. While the motor138is disconnected from the battery106, the inertia of the robot104may keep the robot104moving. This inertia may result in the drive wheels turning the motor138, which now may act like a generator that produces electric power. Responsive to the signal indicative of the stop condition, the braking circuit140operates to dissipate the power produced by the motor138. The braking circuit140may operate to dissipate this power at a predetermined rate. For example, a current regulator may be used to limit the current provided to a resistor that dissipates the produced power as heat. By limiting the dissipation of power to a predetermined rate, the robot104slows down at a predetermined rate. This brings the robot104to a rapid but controlled stop that avoids tipping or skidding. This also allows the robot104to rapidly slow down without significant wear and tear on the components. The braking circuit140is discussed in more detail with regard toFIG.6A. In some implementations, if the back electromotive force (EMF) produced by the motor138as a result of continuing rotation exceeds a threshold value, a braking clamp circuit may be used to dissipate additional energy. The braking clamp circuit is described in more detail with regard toFIG.6B. Once the back EMF is below a threshold value, the braking clamp circuit may deactivate, and the braking circuit140may continue to operate. Once the back EMF produced by the motor138is below a threshold value, a stop circuit142operates. At this point, the robot104has now slowed or is completely stopped. The stop circuit142operates to resist motion of the robot104. The stop circuit142may operate to connect the terminals (or windings) of the motor138under certain conditions. For example, the stop circuit142may comprise a relay that is energized during normal operation. When the back EMF drops below the threshold value, the coil is de-energized, completing the electrical circuit between the terminals of the motor138. By creating a short between the terminals of the motor138, further rotation of the shaft may be inhibited. A fuse may be placed in series with the relay to prevent potential damage to the robot104. For example, if the user102pushes the robot104while the stop circuit142has the terminals shorted, the motor138could generate sufficient power to damage components in the motor138. In this situation, the fuse would open, removing the short and preventing damage. The stop circuit142is discussed in more detail with regard toFIG.7. The various subsystems may be physically located on separate circuit boards. 
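As a rough worked example of dissipating the braking energy at a predetermined rate, consider the following sketch; the mass, speed, current limit, and resistance are assumed values chosen only to illustrate the calculation.

```python
# Illustrative estimate (assumed values) of how limiting dissipation to a
# predetermined rate produces a predictable, controlled stop.

mass_kg = 10.0          # assumed robot mass
speed_m_s = 1.0         # assumed speed at the moment of the stop condition
limit_current_a = 2.0   # assumed current-regulator limit
resistor_ohm = 10.0     # assumed braking resistor

kinetic_energy_j = 0.5 * mass_kg * speed_m_s ** 2           # 5 J
dissipation_rate_w = limit_current_a ** 2 * resistor_ohm    # P = I^2 * R = 40 W

# Ignoring friction and conversion losses, the time to dissipate the kinetic
# energy at this fixed rate bounds the stopping time; the deceleration figure
# is an average, since dissipation at constant power is not constant force.
stop_time_s = kinetic_energy_j / dissipation_rate_w          # 0.125 s
avg_decel_m_s2 = speed_m_s / stop_time_s                     # 8 m/s^2

print(f"~{stop_time_s:.3f} s to stop, ~{avg_decel_m_s2:.1f} m/s^2 average deceleration")
```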
For example, each subsystem may comprise components that share a common rigid or flexible printed circuit board. In some implementations, the motor cutoff circuit132, the motor control circuit136, the braking circuit140, and the stop circuit142may be arranged on the same circuit board. Such placement improves reliability of the system by removing one or more connectors that could potentially fail. Each subsystem may have one or more dedicated I/O devices120. For example, the application subsystem108may include cameras, microphones, and so forth while the mobility subsystem112may include a LIDAR system, ultrasonic sensors, contact sensors, and so forth. Communication between the subsystems may utilize various technologies including, but not limited to, Ethernet, universal serial bus (USB), and so forth. For example, the application subsystem108may communicate with the mobility subsystem112using a USB connection. In some implementations, other communication paths, protocols, and so forth, may be used. For example, the application subsystem108may provide an interrupt pulse along a conductor that is connected to the mobility subsystem112. This interrupt pulse may be used to indicate when one of the subsystems is performing an operation that may affect the operation of the other at a specific time. Continuing the example, the application subsystem108may use a sensor122that uses an illuminator that interferes with a sensor122used by the mobility subsystem112. By providing the interrupt pulse, the application subsystem108may notify the mobility subsystem112as to the illumination, allowing the mobility subsystem112to disregard the potentially erroneous data from that sensor122during the time associated with the interrupt pulse. This technique allows the two subsystems to operate in conjunction with one another, without the need to maintain synchronized timing between the subsystems. By removing the need for synchronized timing, operation of the robot104is simplified. In other implementations, other interrupt lines may be used to provide data indicative of a particular event or occurrence from one subsystem to another. The status signals134may be provided using dedicated lines. For example, each subsystem may have a dedicated electrical conductor to the drive subsystem114that provides the status signal(s)134from that subsystem. The robot104may use the network interfaces118to connect to a network144. For example, the network144may comprise a wireless local area network, which in turn is connected to a wide area network such as the Internet. The robot104may access one or more servers146via the network144. For example, the robot104may utilize a wake word detection module to determine if the user102is addressing a request to the robot104. The wake word detection module may hear a specified word or phrase and transition the robot104or portion thereof to a particular operating mode. Once "awake", the robot104may then transfer at least a portion of the audio spoken by the user102to the servers146for further processing. The servers146may process the spoken audio and return to the robot104data that may be subsequently used to operate the robot104. In some implementations, the speech processing for particular words or phrases may be handled locally. For example, as shown here the user102has said "robot, stop". The speech processing module of the robot104may recognize the command to "stop" and generate a signal indicative of a stop condition.
As a result, the rapid braking system described herein may be activated. The robot104may be configured to dock or connect to a docking station148. The docking station148may also be connected to the network144. For example, the docking station148may be configured to connect to the wireless local area network such that the docking station148and the robot104may communicate. The operation of the docking station148is described in more detail below with regard toFIG.13. In other implementations, other types of autonomous mobile device (AMD) may use the systems and techniques described herein. For example, the AMD may comprise an autonomous ground vehicle that is moving on a street, an autonomous aerial vehicle in the air, and so forth. FIG.2is a block diagram200of the robot104, according to some implementations. For ease of illustration, and not necessarily as a limitation, the overall system is shown without demarcation into the various subsystems. The robot104may include one or more batteries106or other power source to provide electrical power suitable for operating the components in the robot104. The power source may include batteries, capacitors, fuel cells, storage flywheels, wireless power receivers, and so forth. The robot104may include one or more hardware processors116(processors) configured to execute one or more stored instructions. The processor116may comprise one or more cores. The processor116may include microcontrollers, systems on a chip, field programmable gate arrays, digital signal processors, graphic processing units, general processing units, and so forth. One or more clocks202may provide information indicative of date, time, ticks, and so forth. For example, the processor116may use data from the clock202to associate a particular interaction with a particular point in time. The robot104may include one or more communication interfaces204such as input/output (I/O) interfaces206, network interfaces118, and so forth. The communication interfaces204enable the robot104, or components thereof, to communicate with other devices or components. The communication interfaces204may include one or more I/O interfaces206. The I/O interfaces206may comprise Inter-Integrated Circuit (I2C), Serial Peripheral Interface bus (SPI), Universal Serial Bus (USB) as promulgated by the USB Implementers Forum, RS-232, and so forth. The I/O interface(s)206may couple to one or more I/O devices120. The I/O devices120may include input devices such as one or more of a sensor122, keyboard, mouse, scanner, and so forth. The I/O devices120may also include output devices124such as one or more of a motor, light, speaker, display, projector, printer, and so forth. In some embodiments, the I/O devices120may be physically incorporated with the robot104or may be externally placed. The network interfaces118may be configured to provide communications between the robot104and other devices such as other robots104, the docking station148, routers, access points, and so forth. The network interfaces118may include devices configured to couple to personal area networks (PANs), local area networks (LANs), wireless local area networks (WLANS), wide area networks (WANs), and so forth. For example, the network interfaces118may include devices compatible with Ethernet, Wi-Fi, Bluetooth, Bluetooth Low Energy, ZigBee, and so forth.
The robot104may also include one or more busses or other internal communications hardware or software that allow for the transfer of data between the various modules and components of the robot104. As shown inFIG.2, the robot104includes one or more memories126. The memory126may comprise one or more non-transitory computer-readable storage media (CRSM). The CRSM may be one or more of an electronic storage medium, a magnetic storage medium, an optical storage medium, a quantum storage medium, a mechanical computer storage medium, and so forth. The memory126provides storage of computer-readable instructions, data structures, program modules, and other data for the operation of the robot104. A few example functional modules are shown stored in the memory126, although the same functionality may alternatively be implemented in hardware, firmware, or as a system on a chip (SoC). The memory126may include at least one operating system (OS) module208. The OS module208is configured to manage hardware resource devices such as the I/O interfaces206, the I/O devices120, the communication interfaces204, and provide various services to applications or modules executing on the processor116. The OS module208may implement a variant of the FreeBSD operating system as promulgated by the FreeBSD Project; other UNIX or UNIX-like variants; a variation of the Linux operating system as promulgated by Linus Torvalds; the Windows operating system from Microsoft Corporation of Redmond, Washington, USA; the Robot Operating System (ROS), and so forth. Also stored in the memory126may be a data store210and one or more of the following modules. These modules may be executed as foreground applications, background tasks, daemons, and so forth. The data store210may use a flat file, database, linked list, tree, executable code, script, or other data structure to store information. In some implementations, the data store210or a portion of the data store210may be distributed across one or more other devices including other robots104, servers146, network attached storage devices, and so forth. A communication module212may be configured to establish communication with other devices, such as other robots104, an external server146, a docking station148, and so forth. The communications may be authenticated, encrypted, and so forth. Other modules within the memory126may include a safety module214, a sensor data processing module216, the autonomous navigation module130, the one or more task modules128, a speech processing module218, or other modules220. The modules may access data stored within the data store210, such as safety tolerance data222, sensor data224, or other data226. The safety module214may access safety tolerance data222to determine within what tolerances the robot104may operate safely within the physical environment. For example, the safety module214may be configured to stop the robot104from moving when a carrying handle is extended. In another example, the safety tolerance data222may specify a minimum sound threshold which, when exceeded, stops all movement of the robot104. Continuing this example, detection of sound such as a human yell would stop the robot104. In another example, the safety module214may access safety tolerance data222that specifies a minimum distance from an object that the robot104must maintain. Continuing this example, when a sensor122detects an object has approached to less than the minimum distance, all movement of the robot104may be stopped. 
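A minimal sketch of the kind of tolerance checks the safety module214might perform is shown below; the field names and threshold values are assumptions used only for illustration.

```python
# Hypothetical safety-tolerance check: compare current sensor readings
# against safety tolerance data and report whether motion must stop.

safety_tolerance_data = {
    "max_sound_level_db": 90.0,    # assumed: louder than this stops the robot
    "min_object_distance_m": 0.3,  # assumed: closer than this stops the robot
}

def movement_allowed(sound_level_db: float, nearest_object_m: float,
                     handle_extended: bool) -> bool:
    if handle_extended:
        return False
    if sound_level_db > safety_tolerance_data["max_sound_level_db"]:
        return False
    if nearest_object_m < safety_tolerance_data["min_object_distance_m"]:
        return False
    return True

# A human yell near the robot stops all movement.
print(movement_allowed(sound_level_db=95.0, nearest_object_m=1.0,
                       handle_extended=False))  # False
```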
Movement of the robot104may be stopped by one or more of inhibiting operations of one or more of the motors138, issuing a command to stop motor138operation, disconnecting power from one or more of the motors138, and so forth. The safety module214may be implemented as hardware, software, or a combination thereof. The safety module214may produce as output a status signal134that is used to control the rapid braking system. The sensor data processing module216may access sensor data224that is acquired from one or more of the sensors122. The sensor data processing module216may provide various processing functions such as de-noising, filtering, change detection, and so forth. Processing of sensor data224, such as images from a camera344, may be performed by a module implementing, at least in part, one or more of the following tools or techniques. In one implementation, processing of the image data may be performed, at least in part, using one or more tools available in the OpenCV library as developed by Intel Corporation of Santa Clara, California, USA; Willow Garage of Menlo Park, California, USA; and Itseez of Nizhny Novgorod, Russia. In another implementation, functions available in the OKAO machine vision library as promulgated by Omron Corporation of Kyoto, Japan, may be used to process the sensor data224. In still another implementation, functions such as those in the Machine Vision Toolbox (MVTB) available using MATLAB as developed by Math Works, Inc. of Natick, Massachusetts, USA, may be utilized. Techniques such as artificial neural networks (ANNs), active appearance models (AAMs), active shape models (ASMs), principal component analysis (PCA), cascade classifiers, and so forth, may also be used to process the sensor data224or other data226. For example, the ANN may be trained using a supervised learning algorithm such that object identifiers are associated with images of particular objects within training images provided to the ANN. Once trained, the ANN may be provided with the sensor data224and produce output indicative of the object identifier. The autonomous navigation module130provides the robot104with the ability to navigate within the physical environment without real-time human interaction. For example, the autonomous navigation module130may implement one or more simultaneous localization and mapping ("SLAM") techniques to determine an occupancy map or other representation of the physical environment. The SLAM algorithms may utilize one or more of maps, algorithms, beacons, or other techniques to provide navigational data. The navigational data may then be used to determine a path which is then used to determine a set of commands that drive the motors138connected to the wheels. For example, the autonomous navigation module130may access environment map data during operation to determine a relative location, estimate a path to a destination, and so forth. The autonomous navigation module130may include an obstacle avoidance module. For example, if an obstacle is detected along a planned route, the obstacle avoidance module may re-route the robot104to move around the obstacle or take an alternate route. The autonomous navigation module130may produce as output a status signal134that is used to control the rapid braking system. For example, if the autonomous navigation module130detects an imminent or actual collision with an object, the status signal134may be provided that operates the circuitry of the rapid braking system.
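The supervised mapping from sensor data to object identifiers may be illustrated with a greatly simplified stand-in for a trained ANN; the nearest-centroid classifier and synthetic data below are only a sketch of the idea, not the network such a module would actually use.

```python
import numpy as np

# Simplified stand-in for the trained ANN: a nearest-centroid classifier over
# image feature vectors. The training features and labels below are synthetic.

rng = np.random.default_rng(0)
train_features = np.vstack([
    rng.normal(loc=0.0, scale=0.1, size=(20, 8)),   # samples labeled "chair"
    rng.normal(loc=1.0, scale=0.1, size=(20, 8)),   # samples labeled "door"
])
train_labels = np.array(["chair"] * 20 + ["door"] * 20)

# "Training": compute one centroid per object identifier.
centroids = {label: train_features[train_labels == label].mean(axis=0)
             for label in np.unique(train_labels)}

def object_identifier(feature_vector: np.ndarray) -> str:
    """Return the object identifier whose centroid is nearest to the input."""
    return min(centroids, key=lambda lbl: np.linalg.norm(feature_vector - centroids[lbl]))

print(object_identifier(rng.normal(loc=1.0, scale=0.1, size=8)))  # likely "door"
```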
The autonomous navigation module130may utilize various techniques during processing of sensor data224. For example, image data obtained from cameras may be processed to determine one or more of corners, edges, planes, and so forth. In some implementations corners may be detected and the coordinates of those corners may be used to produce point cloud data. The occupancy map may be manually or automatically determined. Continuing the example, during a learning phase, or subsequent operation, the robot104may generate an occupancy map that is indicative of locations of obstacles such as chairs, doors, stairwells, and so forth. In some implementations, the occupancy map may include floor characterization data. The floor characterization data is indicative of one or more attributes of the floor at a particular location within the physical environment. During operation of the robot104, floor characterization data may be obtained. The floor characterization data may be utilized by one or more of the safety module214, the autonomous navigation module130, the task module128, or other modules220. For example, the floor characterization data may be used to determine if an unsafe condition occurs such as a wet floor. In another example, the floor characterization data may be used by the autonomous navigation module130to assist in the determination of the current location of the robot104within the home. The memory126may store one or more task modules128. A task module128comprises instructions that, when executed, provide one or more functions associated with a particular task. In one example, the task may comprise a security or watchman task in which the robot104travels throughout the physical environment looking for events that exceed predetermined thresholds. Continuing the example, if the robot104detects that the ambient temperature is below a minimum level, or that water is present on the floor, or detects the sound of broken glass, an alert may be generated. The alert may be given as an audible, visual, or electronic notification. For example, the electronic notification may involve the robot104transmitting data using one or more of the communication interfaces204. In another example, the task may comprise a "follow me" feature in which the robot104follows a user102. For example, the user102may participate in a video call using the robot104. The camera on the mast may be used to acquire video for transmission while the display is used to present video that is received. The robot104may use data from one or more sensors122to determine a location of the user102relative to the robot104, and track and follow the user102. In one implementation, computer vision techniques may be used to locate the user102within image data acquired by the cameras. In another implementation, the user's voice may be detected by an array of microphones, and a direction to the voice with respect to the robot104may be established. Other techniques may be utilized either alone or in combination to allow the robot104to track a user102, follow a user102, or track and follow a user102. The path of the robot104as it follows the user102may be based at least in part on one or more constraint cost values. For example, while the robot104is following the user102down the hallway, the robot104may stay to the right side of the hallway. In some situations, while following a user102the robot104may disregard some rules or may disregard the speed values for a particular area.
For example, while following the user102the robot104may not slow down while passing a doorway. In yet another example, the task may allow for the robot104to be summoned to a particular location. The user102may utter a voice command that is heard by a microphone on the robot104, a microphone in a smart phone, or another device with a microphone such as a network enabled speaker or television. Alternatively, the user102may issue a command using an app on a smartphone, wearable device, tablet, or other computing device. Given that the location of the device at which the command was obtained is known, the robot104may be dispatched to that location. Alternatively, if the location is unknown, the robot104may search for the user102. The speech processing module218may be used to process utterances of the user102. Microphones may acquire audio in the presence of the robot104and may send raw audio data230to an acoustic front end (AFE). The AFE may transform the raw audio data230(for example, a single-channel, 16-bit audio stream sampled at 16 kHz), captured by the microphone, into audio feature vectors232that may ultimately be used for processing by various components, such as a wakeword detection module234, speech recognition engine, or other components. The AFE may reduce noise in the raw audio data230. The AFE may also perform acoustic echo cancellation (AEC) or other operations to account for output audio data that may be sent to a speaker of the robot104for output. For example, the robot104may be playing music or other audio that is being received from a network144in the form of output audio data. To avoid the output audio interfering with the device's ability to detect and process input audio, the AFE or other component may perform echo cancellation to remove the output audio data from the input raw audio data230, or other operations. The AFE may divide the audio data into frames representing time intervals for which the AFE determines a number of values (i.e., features) representing qualities of the raw audio data230, along with a set of those values (i.e., a feature vector or audio feature vector) representing features/qualities of the raw audio data230within each frame. A frame may be a certain period of time, for example a sliding window of 25 ms of audio data taken every 10 ms, or the like. Many different features may be determined, as known in the art, and each feature represents some quality of the audio that may be useful for automated speech recognition (ASR) processing, wakeword detection, presence detection, or other operations. A number of approaches may be used by the AFE to process the raw audio data230, such as mel-frequency cepstral coefficients (MFCCs), log filter-bank energies (LFBEs), perceptual linear predictive (PLP) techniques, neural network feature vector techniques, linear discriminant analysis, semi-tied covariance matrices, or other approaches known to those skilled in the art. The audio feature vectors232(or the raw audio data230) may be input into a wakeword detection module234that is configured to detect keywords spoken in the audio. The wakeword detection module234may use various techniques to determine whether audio data includes speech. Some embodiments may apply voice activity detection (VAD) techniques. 
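A minimal sketch of the framing step described above, assuming the 16 kHz single-channel audio and the 25 ms window taken every 10 ms mentioned there, with a simple per-frame log energy standing in for the richer features an AFE would produce:

```python
import numpy as np

# Sketch of the framing step: split 16 kHz single-channel audio into 25 ms
# frames taken every 10 ms and compute a per-frame log energy, one of the
# simplest features a front end might produce (and usable for VAD).

SAMPLE_RATE = 16_000
FRAME_LEN = int(0.025 * SAMPLE_RATE)   # 400 samples per 25 ms frame
HOP_LEN = int(0.010 * SAMPLE_RATE)     # 160 samples per 10 ms hop

def frame_log_energy(audio: np.ndarray) -> np.ndarray:
    energies = []
    for start in range(0, len(audio) - FRAME_LEN + 1, HOP_LEN):
        frame = audio[start:start + FRAME_LEN].astype(np.float64)
        energies.append(np.log(np.sum(frame ** 2) + 1e-10))
    return np.array(energies)

one_second_of_noise = np.random.default_rng(1).normal(size=SAMPLE_RATE)
print(frame_log_energy(one_second_of_noise).shape)  # (98,) frames
```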
Such techniques may determine whether speech is present in an audio input based on various quantitative aspects of the audio input, such as the spectral slope between one or more frames of the audio input; the energy levels of the audio input in one or more spectral bands; the signal-to-noise ratios of the audio input in one or more spectral bands; or other quantitative aspects. In other embodiments, the robot104may implement a limited classifier configured to distinguish speech from background noise. The classifier may be implemented by techniques such as linear classifiers, support vector machines, and decision trees. In still other embodiments, Hidden Markov Model (HMM) or Gaussian Mixture Model (GMM) techniques may be applied to compare the audio input to one or more acoustic models in speech storage, which acoustic models may include models corresponding to speech, noise (such as environmental noise or background noise), or silence. Still other techniques may be used to determine whether speech is present in the audio input. Once speech is detected in the audio received by the robot104(or separately from speech detection), the robot104may use the wakeword detection module234to perform wakeword detection to determine when a user102intends to speak a command to the robot104. This process may also be referred to as keyword detection, with the wakeword being a specific example of a keyword. Specifically, keyword detection is typically performed without performing linguistic analysis, textual analysis, or semantic analysis. Instead, incoming audio (or audio data) is analyzed to determine if specific characteristics of the audio match preconfigured acoustic waveforms, audio signatures, or other data to determine if the incoming audio “matches” stored audio data corresponding to a keyword. Thus, the wakeword detection module234may compare audio data to stored models or data to detect a wakeword. One approach for wakeword detection applies general large vocabulary continuous speech recognition (LVCSR) systems to decode the audio signals, with wakeword searching conducted in the resulting lattices or confusion networks. LVCSR decoding may require relatively high computational resources. Another approach for wakeword spotting builds HMMs for each key wakeword word and non-wakeword speech signals respectively. The non-wakeword speech includes other spoken words, background noise, etc. There can be one or more HMMs built to model the non-wakeword speech characteristics, which are named filler models. Viterbi decoding is used to search the best path in the decoding graph, and the decoding output is further processed to make the decision on keyword presence. This approach can be extended to include discriminative information by incorporating a hybrid deep neural network (DNN) Hidden Markov Model (HMM) decoding framework. In another embodiment, the wakeword spotting system may be built on DNN/recursive neural network (RNN) structures directly, without HMM involved. Such a system may estimate the posteriors of wakewords with context information, either by stacking frames within a context window for DNN, or using RNN. Following on, posterior threshold tuning or smoothing is applied for decision making. Other techniques for wakeword detection, such as those known in the art, may also be used. 
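One of the simpler decision rules alluded to above, smoothing per-frame wakeword posteriors over a context window and applying a threshold, might look like the following sketch; the window length and threshold are assumptions for illustration.

```python
import numpy as np

# Sketch of posterior smoothing for wakeword spotting: average per-frame
# wakeword posteriors (e.g., from a DNN) over a short context window and
# declare a detection when the smoothed value exceeds a threshold.

WINDOW = 30          # assumed smoothing window, in frames
THRESHOLD = 0.8      # assumed detection threshold

def wakeword_detected(posteriors: np.ndarray) -> bool:
    if len(posteriors) < WINDOW:
        return False
    kernel = np.ones(WINDOW) / WINDOW
    smoothed = np.convolve(posteriors, kernel, mode="valid")
    return bool(np.max(smoothed) > THRESHOLD)

# Mostly background frames with a burst of high posteriors in the middle.
posteriors = np.concatenate([np.full(50, 0.05), np.full(40, 0.95), np.full(50, 0.05)])
print(wakeword_detected(posteriors))  # True
```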
Once the wakeword is detected, circuitry or applications of the local robot104may "wake" and begin transmitting audio data236(which may include one or more audio feature vectors232or the raw audio data230) to one or more server(s)146for speech processing. The audio data236corresponding to audio obtained by the microphone may be sent to a server146for routing to a recipient device or may be sent to the server146for speech processing for interpretation of the included speech (either for purposes of enabling voice-communications and/or for purposes of executing a command in the speech). The audio data236may include data corresponding to the wakeword, or the portion of the audio data236corresponding to the wakeword may be removed by the local robot104prior to sending. In some implementations, a particular wakeword or phrase may be associated with a stop condition. For example, the phrase "emergency stop" may be a wakeword that results in the application subsystem108producing a status signal134indicative of a stop condition. As a result, the use of this phrase may result in the robot104operating the rapid braking system. The robot104may connect to the network144using one or more of the network interfaces118. One or more servers146may provide various functions, such as ASR, natural language understanding (NLU), providing content such as audio or video to the robot104, and so forth. The other modules220may provide other functionality, such as object recognition, speech synthesis, user identification, and so forth. For example, an ASR module may accept as input raw audio data230or audio feature vectors232and may produce as output a text string that is further processed and used to provide input to a task module128, and so forth. In one implementation, the text string may be sent via a network144to a server146for further processing. The robot104may receive a response from the server146and present output, perform an action, and so forth. For example, the raw audio data230may include the user102saying "robot, go to the dining room". The audio data representative of this utterance may be sent to the server146that returns commands directing the robot104to the dining room of the home associated with the robot104. The utterance may result in a response from the server146that directs operation of other devices or services. For example, the user102may say "robot wake me at seven tomorrow morning". The audio data236may be sent to the server146that determines the intent and generates commands to instruct a device attached to the network144to play an alarm at 7:00 am the next day. The other modules220may comprise a speech synthesis module that is able to convert text data to human speech. For example, the speech synthesis module may be used by the robot104to provide speech that a user102is able to understand. The data store210may also store additional data such as user identifier data that is indicative of the user identifier of a user102associated with the robot104. For example, one or more of the raw audio data230or the audio feature vectors232may be processed to determine the user identifier data of a user102based on the sound of the user's voice. In another implementation an image of the user102may be acquired using one or more cameras and processed using a facial recognition system to determine the user identifier data. The data store210may store other data226such as user preference data.
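As a minimal sketch of how a locally recognized phrase could be mapped to a status signal indicative of a stop condition (the phrase list and function name are hypothetical):

```python
# Hypothetical mapping from locally recognized phrases to the status signal.
STOP_PHRASES = {"stop", "emergency stop", "robot stop"}

def status_signal_from_transcript(transcript: str) -> str:
    """Return 'low' (stop condition) if the transcript contains a stop phrase."""
    text = transcript.lower().strip()
    return "low" if any(phrase in text for phrase in STOP_PHRASES) else "high"

print(status_signal_from_transcript("Robot, stop"))       # low
print(status_signal_from_transcript("play some music"))   # high
```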
FIG.3is a block diagram300of some components of the robot104such as network interfaces118, sensors122, and output devices124, according to some implementations. The components illustrated here are provided by way of illustration and not necessarily as a limitation. For example, the robot104may utilize a subset of the particular network interfaces118, output devices124, or sensors122depicted here, or may utilize components not pictured. The network interfaces118may include one or more of a WLAN interface302, PAN interface304, secondary radio frequency (RF) link interface306, or other interface308. The WLAN interface302may be compliant with at least a portion of the Wi-Fi specification. For example, the WLAN interface302may be compliant with at least a portion of the IEEE 802.11 specification as promulgated by the Institute of Electrical and Electronics Engineers (IEEE). The PAN interface304may be compliant with at least a portion of one or more of the Bluetooth, wireless USB, Z-Wave, ZigBee, or other standards. For example, the PAN interface304may be compliant with the Bluetooth Low Energy (BLE) specification. The secondary RF link interface306may comprise a radio transmitter and receiver that operate at frequencies different from or using modulation different from the other interfaces. For example, the WLAN interface302may utilize frequencies in the 2.4 GHz and 5 GHz Industrial, Scientific, and Medical (ISM) bands, while the PAN interface304may utilize the 2.4 GHz ISM band. The secondary RF link interface306may comprise a radio transmitter that operates in the 900 MHz ISM band, within a licensed band at another frequency, and so forth. The secondary RF link interface306may be utilized to provide backup communication between the robot104and other devices in the event that communication fails using one or more of the WLAN interface302or the PAN interface304. For example, in the event the robot104travels to an area within the physical environment that does not have Wi-Fi coverage, the robot104may use the secondary RF link interface306to communicate with another device such as a specialized access point, docking station148, or other robot104. The other network interfaces308may include other equipment to send or receive data using other wavelengths or phenomena. For example, the other network interface308may include an ultrasonic transceiver used to send data as ultrasonic sounds, a visible light system that communicates by modulating a visible light source such as a light-emitting diode, and so forth. In another example, the other network interface308may comprise a wireless wide area network (WWAN) interface or a wireless cellular data network interface. Continuing the example, the other network interface308may be compliant with at least a portion of the 3G, 4G, LTE, or other standards. The robot104may include one or more of the following sensors122. The sensors122depicted here are provided by way of illustration and not necessarily as a limitation. It is understood that other sensors122may be included or utilized by the robot104, while some sensors122may be omitted in some configurations. A motor encoder310provides information indicative of the rotation or linear extension of a motor138. The motor138may comprise a rotary motor, or a linear actuator. In some implementations, the motor encoder310may comprise a separate assembly such as a photodiode and encoder wheel that is affixed to the motor138.
In other implementations, the motor encoder310may comprise circuitry configured to drive the motor138. For example, the autonomous navigation module130may utilize the data from the motor encoder310to estimate a distance traveled. A suspension weight sensor312provides information indicative of the weight of the robot104on the suspension system for one or more of the wheels802or the caster804. For example, the suspension weight sensor312may comprise a switch, strain gauge, load cell, photodetector, or other sensing element that is used to determine whether weight is applied to a particular wheel, or whether weight has been removed from the wheel. In some implementations, the suspension weight sensor312may provide binary data such as a “1” value indicating that there is a weight applied to the wheel, while a “0” value indicates that there is no weight applied to the wheel802. In other implementations, the suspension weight sensor312may provide an indication such as so many kilograms of force or newtons of force. The suspension weight sensor312may be affixed to one or more of the wheels802or the caster804. In some situations, the safety module214may use data from the suspension weight sensor312to determine whether or not to inhibit operation of one or more of the motors138. For example, if the suspension weight sensor312indicates no weight on the suspension, the implication is that the robot104is no longer resting on its wheels802, and thus operation of the motors138may be inhibited. In another example, if the suspension weight sensor312indicates weight that exceeds a threshold value, the implication is that something heavy is resting on the robot104and thus operation of the motors138may be inhibited. One or more bumper switches314provide an indication of physical contact between a bumper or other member that is in mechanical contact with the bumper switch314. The safety module214may utilize sensor data224obtained by the bumper switches314to modify the operation of the robot104. For example, if the bumper switch314associated with a front of the robot104is triggered, the safety module214may drive the robot104backwards. A floor optical motion sensor (FOMS)316provides information indicative of motions of the robot104relative to the floor or other surface underneath the robot104. In one implementation, the FOMS316may comprise a light source such as a light-emitting diode (LED), an array of photodiodes, and so forth. In some implementations, the FOMS316may utilize an optoelectronic sensor, such as a low resolution two-dimensional array of photodiodes. Several techniques may be used to determine changes in the data obtained by the photodiodes and translate this into data indicative of a direction of movement, velocity, acceleration, and so forth. In some implementations, the FOMS316may provide other information, such as data indicative of a pattern present on the floor, composition of the floor, color of the floor, and so forth. For example, the FOMS316may utilize an optoelectronic sensor that may detect different colors or shades of gray, and this data may be used to generate floor characterization data. An ultrasonic sensor318may utilize sounds in excess of 20 kHz to determine a distance from the sensor to an object. The ultrasonic sensor318may comprise an emitter such as a piezoelectric transducer and a detector such as an ultrasonic microphone. 
The emitter may generate specifically timed pulses of ultrasonic sound while the detector listens for an echo of that sound being reflected from an object within the field of view. The ultrasonic sensor318may provide information indicative of a presence of an object, distance to the object, and so forth. Two or more ultrasonic sensors318may be utilized in conjunction with one another to determine a location of the object within a two-dimensional plane. In some implementations, the ultrasonic sensor318or portion thereof may be used to provide other functionality. For example, the emitter of the ultrasonic sensor318may be used to transmit data and the detector may be used to receive data transmitted as ultrasonic sound. In another example, the emitter of an ultrasonic sensor318may be set to a particular frequency and used to generate a particular waveform such as a sawtooth pattern to provide a signal that is audible to an animal, such as a dog or a cat. An optical sensor320may provide sensor data224indicative of one or more of a presence or absence of an object, a distance to the object, or characteristics of the object. The optical sensor320may use time-of-flight (ToF), structured light, interferometry, or other techniques to generate the distance data. For example, ToF determines a propagation time (or "round-trip" time) of a pulse of emitted light from an optical emitter or illuminator that is reflected or otherwise returned to an optical detector. By dividing the propagation time in half and multiplying the result by the speed of light in air, the distance to an object may be determined. The optical sensor320may utilize one or more sensing elements. For example, the optical sensor320may comprise a 4×4 array of light sensing elements. Each individual sensing element may be associated with a field of view (FOV) that is directed in a different way. For example, the optical sensor320may have four light sensing elements, each associated with a different 10° FOV, allowing the sensor to have an overall FOV of 40°. In another implementation, a structured light pattern may be provided by the optical emitter. A portion of the structured light pattern may then be detected on the object using a sensor122such as an image sensor or camera344. Based on an apparent distance between the features of the structured light pattern, the distance to the object may be calculated. Other techniques may also be used to determine distance to the object. In another example, the color of the reflected light may be used to characterize the object, such as whether the object is skin, clothing, flooring, upholstery, and so forth. In some implementations, the optical sensor320may operate as a depth camera, providing a two-dimensional image of a scene, as well as data that indicates a distance to each pixel. Data from the optical sensors320may be utilized for collision avoidance. For example, the safety module214and the autonomous navigation module130may utilize the sensor data224indicative of the distance to an object in order to prevent a collision with that object. Multiple optical sensors320may be operated such that their FOVs overlap at least partially. To minimize or eliminate interference, the optical sensors320may selectively control one or more of the timing, modulation, or frequency of the light emitted. For example, a first optical sensor320may emit light modulated at 30 kHz while a second optical sensor320emits light modulated at 33 kHz.
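The time-of-flight relationship described above reduces to a one-line calculation; the pulse time in the example below is an assumed value.

```python
SPEED_OF_LIGHT_IN_AIR_M_S = 299_702_547.0  # approximate speed of light in air

def tof_distance_m(round_trip_time_s: float) -> float:
    """Distance = (round-trip propagation time / 2) * speed of light in air."""
    return (round_trip_time_s / 2.0) * SPEED_OF_LIGHT_IN_AIR_M_S

# A pulse returning after 10 nanoseconds corresponds to roughly 1.5 m.
print(f"{tof_distance_m(10e-9):.2f} m")
```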
A lidar322sensor provides information indicative of a distance to an object or portion thereof by utilizing laser light. The laser is scanned across a scene at various points, emitting pulses which may be reflected by objects within the scene. Based on the time-of-flight distance to that particular point, sensor data224may be generated that is indicative of the presence of objects and their relative positions, shapes, and so forth, as visible to the lidar322. Data from the lidar322may be used by various modules. For example, the autonomous navigation module130may utilize point cloud data generated by the lidar322for localization of the robot104within the physical environment. A mast position sensor324provides information indicative of a position of the mast. For example, the mast position sensor324may comprise limit switches associated with the mast extension mechanism that indicate whether the mast914is in an extended or retracted position. In other implementations, the mast position sensor324may comprise an optical code on at least a portion of the mast914that is then interrogated by an optical emitter and a photodetector to determine the distance to which the mast914is extended. In another implementation, the mast position sensor324may comprise an encoder wheel that is attached to a mast motor that is used to raise or lower the mast914. The mast position sensor324may provide data to the safety module214. For example, if the robot104is preparing to deploy the carrying handle, data from the mast position sensor324may be checked to determine if the mast914is retracted, and if not, the mast914may be retracted prior to deployment of the carrying handle. By retracting the mast914before the carrying handle is deployed, injury to the user102as well as damage to the mast914is avoided as the user102bends down to grasp the carrying handle. A mast strain sensor326provides information indicative of a strain on the mast with respect to the remainder of the robot104. For example, the mast strain sensor326may comprise a strain gauge or load cell that measures a side-load applied to the mast914, a weight on the mast914, or downward pressure on the mast914. The safety module214may utilize sensor data224obtained by the mast strain sensor326. For example, if the strain applied to the mast914exceeds a threshold amount, the safety module214may direct an audible and visible alarm to be presented by the robot104. A payload weight sensor328provides information indicative of the weight associated with the modular payload bay912. The payload weight sensor328may comprise one or more sensing mechanisms to determine the weight of a load. These sensing mechanisms may include piezoresistive devices, piezoelectric devices, capacitive devices, electromagnetic devices, optical devices, potentiometric devices, microelectromechanical devices, and so forth. The sensing mechanisms may operate as transducers that generate one or more signals based on an applied force, such as that of the load due to gravity. For example, the payload weight sensor328may comprise a load cell having a strain gauge and a structural member that deforms slightly when weight is applied. By measuring a change in the electrical characteristic of the strain gauge, such as capacitance or resistance, the weight may be determined. In another example, the payload weight sensor328may comprise a force sensing resistor (FSR). The FSR may comprise a resilient material that changes one or more electrical characteristics when compressed.
For example, the electrical resistance of a particular portion of the FSR may decrease as the particular portion is compressed. In some implementations, the safety module214may utilize the payload weight sensor328to determine if the modular payload bay912has been overloaded. If so, an alert or notification may be issued. One or more device temperature sensors330may be utilized by the robot104. The device temperature sensors330provide temperature data of one or more components within the robot104. For example, a device temperature sensor330may indicate a temperature of one or more of the batteries106, one or more motors138, and so forth. In the event the temperature exceeds a threshold value, the component associated with that device temperature sensor330may be shut down. One or more interlock sensors332may provide data to the safety module214or other circuitry that prevents the robot104from operating in an unsafe condition. For example, the interlock sensors332may comprise switches that indicate whether an access panel is open, if the carrying handle is deployed, and so forth. The interlock sensors332may be configured to inhibit operation of the robot104until the interlock switch indicates a safe condition is present. A gyroscope334may provide information indicative of rotation of an object affixed thereto. For example, the gyroscope334may generate sensor data224that is indicative of a change in orientation of the robot104or portion thereof. An accelerometer336provides information indicative of a direction and magnitude of an imposed acceleration. Data such as rate of change, determination of changes in direction, speed, and so forth may be determined using the accelerometer336. The accelerometer336may comprise mechanical, optical, micro-electromechanical, or other devices. For example, the gyroscope334and the accelerometer336may comprise a prepackaged solid-state inertial measurement unit (IMU) that provides multiple axis gyroscopes334and accelerometers336. A magnetometer338may be used to determine an orientation by measuring ambient magnetic fields, such as the terrestrial magnetic field. For example, the magnetometer338may comprise a Hall effect transistor that provides output compass data indicative of a magnetic heading. The robot104may include one or more location sensors340. The location sensors340may comprise an optical, radio, or other navigational system such as a global positioning system (GPS) receiver. For indoor operation, the location sensors340may comprise indoor position systems, such as using Wi-Fi Positioning Systems (WPS). The location sensors340may provide information indicative of a relative location, such as "living room" or an absolute location such as particular coordinates indicative of latitude and longitude, or displacement with respect to a predefined origin. A photodetector342provides sensor data224indicative of impinging light. For example, the photodetector342may provide data indicative of a color, intensity, duration, and so forth. A camera344generates sensor data224indicative of one or more images. The camera344may be configured to detect light in one or more wavelengths including, but not limited to, terahertz, infrared, visible, ultraviolet, and so forth. For example, an infrared camera344may be sensitive to wavelengths between approximately 700 nanometers and 1 millimeter. The camera344may comprise charge coupled devices (CCD), complementary metal oxide semiconductor (CMOS) devices, microbolometers, and so forth.
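Stepping back to the magnetometer338, one common way to convert its horizontal field measurements into a compass heading, assuming a level sensor, is an arctangent of the two horizontal components; the sample values and axis convention below are illustrative assumptions.

```python
import math

def magnetic_heading_deg(mag_x: float, mag_y: float) -> float:
    """Heading in degrees from magnetic north, assuming a level sensor and
    an axis convention in which +x points toward magnetic north."""
    heading = math.degrees(math.atan2(mag_y, mag_x))
    return heading % 360.0

print(magnetic_heading_deg(20.0, 0.0))   # 0.0  -> magnetic north (per this convention)
print(magnetic_heading_deg(0.0, 20.0))   # 90.0 -> magnetic east (per this convention)
```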
The robot104may use image data acquired by the camera344for object recognition, navigation, collision avoidance, user communication, and so forth. For example, a pair of cameras344sensitive to infrared light may be mounted on the front of the robot104to provide binocular stereo vision, with the sensor data224comprising images being sent to the autonomous navigation module130. In another example, the camera344may comprise a 10 megapixel or greater camera that is used for videoconferencing or for acquiring pictures for the user102. The camera344may include a global shutter or a rolling shutter. The shutter may be mechanical or electronic. A mechanical shutter uses a physical device such as a shutter vane or liquid crystal to prevent light from reaching a light sensor. In comparison, an electronic shutter comprises a specific technique of how the light sensor is read out, such as progressive rows, interlaced rows, and so forth. With a rolling shutter, not all pixels are exposed at the same time. For example, with an electronic rolling shutter, rows of the light sensor may be read progressively, such that the first row on the sensor was taken at a first time while the last row was taken at a later time. As a result, a rolling shutter may produce various image artifacts, especially with regard to images in which objects are moving. In contrast, with a global shutter the light sensor is exposed all at a single time, and subsequently read out. In some implementations, the camera(s)344, particularly those associated with navigation or autonomous operation, may utilize a global shutter. In other implementations, the camera(s)344providing images for use by the autonomous navigation module130may be acquired using a rolling shutter and subsequently may be processed to mitigate image artifacts. One or more microphones346may be configured to acquire information indicative of sound present in the physical environment. In some implementations, arrays of microphones346may be used. These arrays may implement beamforming techniques to provide for directionality of gain. The robot104may use the one or more microphones346to acquire information from acoustic tags, accept voice input from users102, determine ambient noise level, for voice communication with another user or system, and so forth. An air pressure sensor348may provide information indicative of an ambient atmospheric pressure or changes in ambient atmospheric pressure. For example, the air pressure sensor348may provide information indicative of changes in air pressure due to opening and closing of doors, weather events, and so forth. An air quality sensor350may provide information indicative of one or more attributes of the ambient atmosphere. For example, the air quality sensor350may include one or more chemical sensing elements to detect the presence of carbon monoxide, carbon dioxide, ozone, and so forth. In another example, the air quality sensor350may comprise one or more elements to detect particulate matter in the air, such as a photoelectric detector, ionization chamber, and so forth. In another example, the air quality sensor350may include a hygrometer that provides information indicative of relative humidity. An ambient light sensor352may comprise one or more photodetectors342or other light-sensitive elements that are used to determine one or more of the color, intensity, or duration of ambient lighting around the robot104. 
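As an illustration of the directionality an array of microphones346can provide, the sketch below estimates an angle of arrival from the delay between two microphones; the spacing, sample rate, and synthetic signals are assumptions made only for this example.

```python
import numpy as np

# Two-microphone direction-of-arrival sketch: find the delay (in samples)
# that maximizes the cross-correlation, then convert it to an angle using
# the microphone spacing and the speed of sound.

SAMPLE_RATE = 16_000
MIC_SPACING_M = 0.10          # assumed distance between the two microphones
SPEED_OF_SOUND_M_S = 343.0

def angle_of_arrival_deg(mic_a: np.ndarray, mic_b: np.ndarray) -> float:
    corr = np.correlate(mic_a, mic_b, mode="full")
    delay_samples = np.argmax(corr) - (len(mic_b) - 1)
    delay_s = delay_samples / SAMPLE_RATE
    # Clamp to the physically possible range before taking the arcsine.
    ratio = np.clip(delay_s * SPEED_OF_SOUND_M_S / MIC_SPACING_M, -1.0, 1.0)
    return float(np.degrees(np.arcsin(ratio)))

rng = np.random.default_rng(2)
source = rng.normal(size=2000)
shift = 3                                   # mic_a hears the source 3 samples later
mic_a = np.concatenate([np.zeros(shift), source])
mic_b = np.concatenate([source, np.zeros(shift)])
print(f"{angle_of_arrival_deg(mic_a, mic_b):.1f} degrees off broadside")
```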
An ambient temperature sensor354provides information indicative of the temperature of the ambient environment proximate to the robot104. In some implementations, an infrared temperature sensor may be utilized to determine the temperature of another object at a distance. A floor analysis sensor356may include one or more components that are used to generate at least a portion of the floor characterization data. In one implementation, the floor analysis sensor356may comprise circuitry that may be used to determine one or more of the electrical resistance, electrical inductance, or electrical capacitance of the floor. For example, two or more of the wheels802in contact with the floor may include an electrically conductive pathway between the circuitry and the floor. By using two or more of these wheels802, the circuitry may measure one or more of the electrical properties of the floor. Information obtained by the floor analysis sensor356may be used by one or more of the safety module214, the autonomous navigation module130, the task module128, and so forth. For example, if the floor analysis sensor356determines that the floor is wet, the safety module214may decrease the speed of the robot104and generate a notification alerting the user102. The floor analysis sensor356may include other components as well. For example, a coefficient of friction sensor may comprise a probe that comes into contact with the surface and determines the coefficient of friction between the probe and the floor. A wheel rotation sensor358provides output that is indicative of rotation of a wheel, such as a drive wheel or caster wheel. For example, the wheel may include a magnet, while the wheel rotation sensor358comprises a Hall sensor or reed switch that is able to detect the magnetic field of the magnet. As the wheel rotates, the sensor detects the magnet moving past. Based on when the magnet is detected, a determination may be made about one or more of a rotation rate of the wheel, whether the wheel is no longer rotating, and so forth. For example, if the wheel seizes and no longer rotates about its axle, there is the potential for the robot104to drag the wheel along the floor, potentially damaging the floor, the wheel, or both. If the wheel rotation sensor358detects that the wheel is not rotating about its axle while the motors138are engaged to move the robot104, a status signal134may be produced that indicates a stop condition. As a result, the rapid braking system may operate to stop the robot104. In another implementation, the wheel rotation sensor358provides data indicative of one or more of a direction of orientation, angular velocity, linear speed of the wheel, and so forth. For example, if the wheel is a caster, the wheel rotation sensor358may comprise an optical encoder and corresponding target that is able to determine that the caster804transitioned from an angle of 0° at a first time to 49° at a second time. The sensors122may include a radar360. The radar360may be used to provide information as to a distance, lateral position, and so forth, to an object. The sensors122may include a passive infrared (PIR) sensor362. The PIR sensor362may be used to detect the presence of people, pets, hotspots, and so forth. For example, the PIR sensor362may be configured to detect infrared radiation with wavelengths between 8 and 14 micrometers. The robot104may include other sensors364as well. For example, a capacitive proximity sensor may be used to provide proximity data to adjacent objects.
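The rotation-rate and seized-wheel determinations described above for the wheel rotation sensor 358 can be summarized with a short sketch. This is a minimal illustration assuming a single magnet detection per wheel revolution and an arbitrarily chosen stall timeout; the timestamps, timeout value, and function names are assumptions made only for illustration and are not taken from this disclosure.

    def rotation_rate_rpm(pulse_times_s):
        """Estimate wheel speed from timestamps of successive magnet detections,
        assuming one detection per revolution."""
        if len(pulse_times_s) < 2:
            return 0.0
        period_s = pulse_times_s[-1] - pulse_times_s[-2]
        return 60.0 / period_s if period_s > 0 else 0.0

    def wheel_stall_detected(motor_engaged, last_pulse_time_s, now_s, stall_timeout_s=0.5):
        """Report a stop condition when the motors are engaged but no wheel
        rotation has been sensed within the stall timeout, as when a wheel seizes."""
        return motor_engaged and (now_s - last_pulse_time_s) > stall_timeout_s

A status signal 134 derived from such a stall determination could then feed the rapid braking process described below with regard to FIG. 4.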
In another example, the other sensors364may include humidity sensors to determine the humidity of the ambient air. Other sensors364may include radio frequency identification (RFID) readers, near field communication (NFC) systems, coded aperture cameras, and so forth. For example, NFC tags may be placed at various points within the physical environment to provide landmarks for the autonomous navigation module130. One or more touch sensors may be utilized to determine contact with a user102or other object. The robot104may include one or more output devices124. The motor138may be used to provide linear or rotary motion. A light382may be used to emit photons. A speaker384may be used to emit sound. A display386may comprise one or more of a liquid crystal display, light emitting diode display, electrophoretic display, cholesteric display, interferometric display, and so forth. The display386may be used to present visible information such as graphics, pictures, text, and so forth. In some implementations, the display386may comprise a touchscreen that combines a touch sensor and a display386. In some implementations, the robot104may be equipped with a projector388. The projector388may be able to project an image on a surface such as the floor, wall, ceiling, and so forth. A scent dispenser390may be used to emit one or more smells. For example, the scent dispenser390may comprise a plurality of different scented liquids that may be evaporated or vaporized in a controlled fashion to release predetermined amounts of each. A handle release392may comprise an electrically operated mechanism such as one or more of a motor, solenoid, piezoelectric material, electroactive polymer, or shape-memory alloy. In one implementation, the handle release392may release a latch that allows a spring to push the carrying handle into the deployed position. In another implementation, the electrically operated mechanism may provide a force that deploys the carrying handle. Retraction of the carrying handle may be manual or electronically activated. In other implementations, other output devices394may be utilized. For example, the robot104may include a haptic output device that provides output that produces particular touch sensations to the user102. Continuing the example, a motor138with an eccentric weight may be used to create a buzz or vibration to allow the robot104to simulate the purr of a cat. FIG.4is a flow diagram400of a process to rapidly brake the robot104, according to some implementations. The process may be implemented at least in part by the circuitry described inFIGS.5-7. The process is described with respect to a single motor138for ease of discussion, and not necessarily as a limitation. For example, the process may be used with two or more motors138. At402a stop condition is determined. For example, the stop condition may result from one or more of a failure of one or more components of the device, expected collision of the device with an object, collision of the device with an object, receipt of a command to stop movement of the motor138, and so forth. As a result of the stop condition, a status signal134may be produced. For example, the stop condition may be indicated by the status signal134transitioning from a high value (having a voltage above a threshold value) to a low value (in which the voltage is below the threshold value). The status signals134from a plurality of components, subsystems, and so forth, may be used as input to a multiple-input AND gate.
If any one of the status signals134is in the low state, the output from the AND gate is also low, producing a stop signal. At404, the motor cutoff circuit132is operated to disconnect the motor138from the power supply. For example, responsive to the stop signal, the motor cutoff circuit132may disconnect a first terminal of the motor138from a first terminal of the battery106. The motor cutoff circuit132is described in more detail with regard toFIG.5. At406the braking circuit140is operated to dissipate power produced by motion of the motor138. The power may be dissipated at a predetermined rate. For example, the braking circuitry140may use a current regulator to control the current delivered to a resistor, resulting in a dissipation of power at the predetermined rate. At408, responsive to back EMF produced by the motor138being below a threshold value, operate the stop circuit142. The stop circuit142produces a short between the terminals (windings) of the motor138. The stop circuit142is described in more detail with regard toFIG.7. At410a start condition is determined. For example, the start condition may be indicated by the status signal134transitioning the stop signal from the low value to the high value, indicative of the start condition. At412the motor cutoff circuit132operates, responsive to the signal indicative of the start condition, reconnecting the motor138to the power supply. At414, the stop circuit142operates to discontinue the short between the terminals of the motor138. For example, a coil in a relay in the stop circuit142may be energized once power is applied to the motor138. Once energized, the connection between the terminals of the motor138is broken. At416the braking circuit140operates to discontinue dissipation of power. For example, as the stop signal goes to the high state, the current regulator stops transferring power between the motor138and the resistor. At418, normal operation resumes. The motors138may operate as driven by the motor control circuit136. The operations at412-416may occur at substantially the same time. For example, the circuitry described in412-416may operate in parallel to one another. FIG.5is a schematic of a motor cutoff circuit132of the rapid braking system, according to some implementations. Specific component models, values, and so forth as indicated by this schematic, are described for illustration, and not necessarily as a limitation. As described above, a source such as components or subsystems provides status signals134to an input of the motor cutoff circuit132. These status signals134may represent a normal (non-fault) condition such as a “high” voltage (above a threshold voltage) while a stop (fault) condition is represented as a “low” voltage (below the threshold voltage). By utilizing the signaling, a power loss in the source will result in the presentation of a stop (fault) condition to the motor cutoff circuit132. Sources for the status signals134may include, but are not limited to, one or more parts of the motor control circuit136, the application subsystem108, the mast subsystem110, the mobility subsystem112, the drive subsystem114, and so forth. The status signals134are provided as inputs to a multiple-input AND gate, or combination of AND gates. In this illustration, a pair of the multiple-input AND gates U1and U2that can each handle three inputs are used in conjunction with one another to handle the five status signals134. In other implementations, other configurations of AND gates may be used. 
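The combined effect of the status signals 134 and the sequence of steps 404-416 can also be summarized in software form. The following is a minimal sketch provided only to clarify the logic; in the design described here the same behavior is produced by non-reprogrammable hardware (the AND gates and the circuits of FIGS. 5-7), and the threshold value and function names below are illustrative assumptions.

    BACK_EMF_THRESHOLD_V = 2.0  # assumed value; the disclosure does not specify it

    def vm_en(status_signals):
        """Model of the multiple-input AND gate: the enable signal is high only
        while every status signal reports a normal (high) condition."""
        return all(status_signals)

    def braking_actions(status_signals, back_emf_volts):
        """Return the actions of FIG. 4 implied by the current inputs."""
        if not vm_en(status_signals):
            actions = ["disconnect motor from battery",    # step 404, motor cutoff circuit 132
                       "dissipate power at the set rate"]  # step 406, braking circuit 140
            if back_emf_volts < BACK_EMF_THRESHOLD_V:
                actions.append("short motor windings")     # step 408, stop circuit 142
            return actions
        return ["reconnect motor to battery",              # step 412
                "remove winding short",                    # step 414
                "stop dissipating power"]                  # steps 416-418

For example, braking_actions([True, True, False, True, True], back_emf_volts=0.5) reports all three braking actions, since a single low status signal 134 is sufficient to trigger the stop condition.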
The output of the multiple-input AND gate(s) is labeled in this schematic as “VM_EN”. When all of the inputs to the multiple-input AND gate(s) are high, the output is high. As a result, in the event any one of the status signals134indicates a stop condition, the voltage on the VM_EN line will drop from the high to the low state. In this illustration, the integrated circuit U3comprises a Texas Instruments, Inc. LM5069MM-1 positive high-voltage host swap and in-rush current controller with power limiting. The VM_EN line connects to a UVLO pin of an integrated circuit U3. A VIN pin of U3is connected to a positive terminal of the battery106that is labeled “VDD_VSYS_14P4” in this schematic. The motor cutoff circuit132includes a first N-channel field effect transistor (Q1) and a second N-channel FET (Q2). The Q1has a first source terminal, a first drain terminal connected to the positive terminal of the battery106, and a first gate terminal connected to the GATE pin of U3. The Q2has a second source terminal that is connected to the first source terminal of the Q1, a second drain terminal connected to the VDD_VM line, and a second gate terminal that is connected to the GATE pin of U3. The OUT pin of U3is connected to the first source terminal and the second source terminal. As illustrated here, the resistor R12on the PWR pin of U3limits the power dissipation of Q1and Q2to approximately 45 W total during the power on inrush. When the VM_EN line is high, indicative of normal operation, the U3provides sufficient voltage at the GATE pin to place Q1and Q2into a conductive state, allowing current to pass from the positive terminal of the battery106to the VDD_VM line. The VDD_VM line, in turn, is connected to a positive terminal of the motor138, or the motor control circuit136that in turn is connected to the positive terminal of the motor138. In the event that the VM_EN line goes low, U3discontinues the voltage at the GATE pin, placing both Q1and Q2into a non-conductive state, preventing current flow between the battery106and the motor138. The operation of the motor cutoff circuit132is failsafe. A single status signal134indicative of a stop condition results in the motor138being disconnected from the battery106. Likewise, a power failure to the motor cutoff circuit132also results in the motor138being disconnected from the battery106. As illustrated, the motor cutoff circuit132may omit the use of any reprogrammable devices, further simplifying operation. FIG.6Ais a schematic of a braking circuit140of the rapid braking system, according to some implementations. Specific component models, values, and so forth as indicated by this schematic, are described for illustration, and not necessarily as a limitation. The braking circuit140accepts as input the signal provided by the VM_EN line. The VM_EN line is high during normal operation and low when a stop condition has been determined. The VM_EN line is connected to resistor R14that in turn connects to ground. An N-channel field effect transistor Q3has a first source terminal connected to the ground, a first drain terminal, and a first gate terminal that is connected to the VM_EN line. The first drain terminal of Q3is connected to resistor R13, which in turn is connected to the positive terminal of the battery106. The first drain terminal of Q3is also connected to resistor R15, which in turn is connected to the ground. The first drain terminal of Q3is connected to a second gate terminal of an N-channel FET Q4. 
A second source terminal of Q4is connected to the ground. Q3inverts the signal present at VM_EN to control Q4. For example, when the VM_EN is high, the output from the drain of Q3is low, placing Q4into a non-conducting state. The braking circuit140includes one or more current regulators, such as U4, U5, and U6as illustrated. In this illustration, the current regulators U4, U5, and U6may comprise a Texas Instruments Inc. LM317KTTR 3-terminal adjustable regulator. The number of current regulators may be determined based on the amount of power that is to be dissipated during braking, and the rate of that dissipation. A second drain terminal of Q4is connected via the CURRENT_SINK_EN line to a resistor, such as R20, that in turn is connected to an output voltage adjustment terminal such as an ADJUST pin of U6and, via another resistor R18, to an OUTPUT pin. An INPUT pin of U6is connected to the motor138, such as the VDD_VM line. With this particular integrated circuit, the inclusion of the resistor R18between the ADJUST and OUTPUT pins results in U6operating as a precision current regulator. When the VM_EN signal is low, the CURRENT_SINK_EN line is connected to ground, and power from the VDD_VM line is dissipated. For example, the current regulator U6controls the current transfer from VDD_VM to 2 A, thus controlling the rate of power dissipation. By controlling the rate of current transfer by the current regulator, the rate at which the robot104decelerates may be selected. The other current regulators U4and U5may be similarly configured. To dissipate further power, additional current regulators and their associated circuitry may be used. For example, as depicted here, the three current regulators U4, U5, and U6are used. FIG.6Bis a schematic of a braking clamp circuit602of the braking circuit140, according to some implementations. Specific component models, values, and so forth as indicated by this schematic, are described for illustration, and not necessarily as a limitation. In some implementations, abrupt disconnect of the motor138from the battery106may result in a voltage that could cause damage to one or more of the components in the robot104. The braking clamp circuit602may be configured to determine if a voltage produced by the motor138exceeds a threshold value, and if so, directly connects the motor138to a resistive load, clamping this spiking voltage. The braking clamp circuit602may operate in conjunction with the current regulators U4, U5, and U6described above. When the voltage produced by the motor138drops below the threshold value, the braking clamp circuit602may disconnect the resistive load, with the remaining energy dissipated through the current regulators U4, U5, and U6as described above. In this illustration, a comparator U8is used to determine if the voltage on the VDD_VM line is greater than a threshold value, such as 24V. If so, the comparator U8produces a high value that is provided to an input pin of U9, a single-channel high-speed gate driver, such as the UCC27537DBVT from Texas Instruments, Inc. The output from U9is connected to a gate terminal of FET Q5. When the output from U9is high, Q5is transitioned to a conducting state. The source terminal of Q5is connected to the ground while the drain terminal of Q5is connected to the VDD_VM line via a sink resistor. This sink resistor exhibits relatively low resistance, such as 10 ohms, and a high power dissipation rating, such as 50 W.
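The relationship between the current regulator configuration and the rate of power dissipation can be estimated with a short calculation. The sketch below assumes the nominal 1.25 V reference typical of this style of three-terminal adjustable regulator and a hypothetical programming resistor value; the actual resistor values are not given in this description, so the numbers are illustrative only.

    REGULATOR_REFERENCE_V = 1.25  # nominal OUTPUT-to-ADJUST reference of an LM317-style part

    def regulator_current_a(program_resistor_ohms):
        """Set-point current of one regulator wired as a current source."""
        return REGULATOR_REFERENCE_V / program_resistor_ohms

    def braking_dissipation_w(motor_line_volts, program_resistor_ohms, regulator_count=3):
        """Approximate total power pulled from the VDD_VM line during braking."""
        per_regulator_w = regulator_current_a(program_resistor_ohms) * motor_line_volts
        return per_regulator_w * regulator_count

    # With an assumed 0.62 ohm programming resistor each regulator sinks about 2 A,
    # so at 24 V on the VDD_VM line three regulators dissipate roughly 145 W.
    print(braking_dissipation_w(24.0, 0.62))

Selecting a larger or smaller programming resistance, or a different number of regulators, changes the dissipation rate and therefore the rate at which the robot 104 decelerates.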
With this circuit, if the VDD_VM exceeds the threshold value, the sink resistor provides the VDD_VM with a path to ground, dissipating energy from the motor138in the sink resistor. When the voltage on the VDD_VM line is below the threshold, Q5is transitioned to a non-conductive state, stopping current flow through the sink resistor. Meanwhile, the current regulators U4, U5, AND U6as described above may be operating and continue operating to dissipate power produced by the motor138. FIG.7is a schematic700of a stop circuit142of the rapid braking system, according to some implementations. Specific component models, values, and so forth as indicated by this schematic, are described for illustration, and not necessarily as a limitation. The VDD_VM line that is connected to a terminal of the motor138provides input to the stop circuit142. The VDD_VM line provides input to a voltage divider comprising R34and R35. Output from the voltage divider is provided to an ENABLE terminal on a load switch U10. The ENABLE terminal is used to control operation of the load switch U10. For example, a logic high signal may be applied to the ENABLE terminal to activate the load switch U10. In one implementation, the load switch U10may comprise a slew rate controlled load switch such as a SIP32461DB-T2-GE1 from Vishay Intertechnology, Inc. An INPUT pin of U10is connected to the power source, such as a low voltage line VDD_3V3_FS. A GROUND pin of U10is connected to ground. An OUT pin of U10is connected to a positive terminal of a coil in relay K1, while the negative terminal of the coil is connected to ground. K1may comprise a TX2SA-3V-Z relay from Panasonic Electric Works Co., Ltd. During normal operation, when VDD_VM is high, U10provides a voltage to the OUTPUT pin that in turn energizes the coil of the relay K1. When the coil is energized, the circuit between a first contact and a second contact of the relay is open, and no current flows. The first contact of the relay K1is connected the ground via a fuse or other overcurrent protection device. For example, the fuse may comprise a NANOSMDC075F-2 by Littelfuse, Inc. The second contact is connected to one or more terminals or windings of the motor138. When the coil is deenergized, a connection is made between the first contact and the second contact, shorting the terminals of the motor138. The short across the windings of the motor138may retard further rotation of the shaft of the motor138. For example, with a BLDC motor, a short between the terminals results in the motor's138shaft being resistant to rotation. The presence of power at VDD_VM, such as during normal operation or while the motor138is producing power, results in the coil of the relay being energized, and no connection between the first contact and the second contact. The motor138may thus be driven by the motor control circuit136or may be slowed by the action of the braking circuit140. When the voltage at VDD_VM drops below the threshold value, such as when the motor cutoff circuit132has disconnected the battery106from the motor138, and when the motor138is producing a voltage that is less than the threshold value, the coil de-energizes and the motor138windings are shorted, producing the braking effect to resist further movement of the robot104. 
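Taken together, the braking circuit 140, the braking clamp circuit 602, and the stop circuit 142 just described hand off to one another as the voltage on the VDD_VM line falls. The sketch below summarizes that handoff; the 24 V clamp threshold is taken from the comparator example above, while the relay dropout voltage and function name are assumptions used only to illustrate the sequence.

    CLAMP_THRESHOLD_V = 24.0   # example comparator threshold from FIG. 6B
    COIL_DROPOUT_V = 3.0       # assumed voltage below which the relay coil de-energizes

    def braking_stage(vdd_vm_volts, stop_condition_active):
        """Approximate which dissipation path is active for a given motor-line voltage."""
        if not stop_condition_active:
            return "normal operation, relay coil energized, no short"
        if vdd_vm_volts > CLAMP_THRESHOLD_V:
            return "clamp resistor plus current regulators"     # FIG. 6B with FIG. 6A
        if vdd_vm_volts > COIL_DROPOUT_V:
            return "current regulators only"                    # FIG. 6A
        return "relay contacts closed, motor windings shorted"  # FIG. 7

This is only a conceptual summary; in the hardware these transitions follow directly from the comparator, the regulators, and the relay coil, with no processor involved.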
When power is restored, such as when the status signals134indicate a normal condition and the motor cutoff circuit132restores power to the motor138, the coil is again energized, the connection between the first and second coil contacts is broken, and the short is removed, allowing the motor138to operate. The type and quantity of relays used may be based on the number and construction of motors138protected. In this illustration the robot104may have two motors138for motive power, a left motor and a right motor. Each motor may comprise a motor having one or more phases. For example, the motor may be a three-phase brushless DC motor. A winding from each phase may then be routed through a relay as shown. As a result, when the OUTPUT from U10is low, the coils for all relays may be deenergized, resulting in all of the windings for all of the motors138being shorted. This configuration of stop circuit142allows for failsafe operation. When the voltage on the VDD_VM line drops, the windings of the motor138are shorted, inhibiting further rotation of the motors138. If an external force, such as the user102, were to push the robot104and cause the motor138to turn, the fuses would open, breaking the short, preventing damage to the components of the robot104. FIG.8is a front view800of the robot104, according to some implementations. In this view, wheels802are depicted on the left and right sides of a lower structure. As illustrated here, the wheels802are canted inwards towards an upper structure. In other implementations, the wheels802may be mounted vertically. The caster804is visible along the midline. The front section of the robot104includes a variety of sensors122. A first pair of optical sensors320are located along the lower edge of the front while a second pair of optical sensors320are located along an upper portion of the front. Between the second set of the optical sensors320is a microphone346(array). The wheels802are drive wheels, driven by one or more motors138. For example, a left motor138may drive the left wheel802and a right motor138may drive the right wheel802. In some implementations, one or more microphones346may be arranged within or proximate to the display386. For example, a microphone346array may be arranged within the bezel of the display386. A pair of cameras344separated by a distance are mounted to the front of the robot104and provide for stereo vision. The distance or “baseline” between the pair of cameras344may be between 5 and 15 centimeters (cm). For example, the pair of cameras344may have a baseline of 10 cm. In some implementations, these cameras344may exhibit a relatively wide horizontal field-of-view (HFOV). For example, the HFOV may be between 90° and 110°. A relatively wide FOV allows for easier detection of moving objects, such as users102or pets that may be in the path of the robot104. Also, the relatively wide FOV facilitates the robot104being able to detect objects when turning. The sensor data224comprising images produced by this pair of cameras344can be used by the autonomous navigation module130for navigation of the robot104. The cameras344used for navigation may be of different resolution from, or sensitive to different wavelengths than, cameras344used for other purposes such as video communication. For example, the navigation cameras344may be sensitive to infrared light allowing the robot104to operate in darkness, while the camera344mounted above the display386may be sensitive to visible light and is used to generate images suitable for viewing by a person. 
Continuing the example, the navigation cameras344may have a resolution of at least 300 kilopixels each while the camera344mounted above the display386may have a resolution of at least 10 megapixels. In other implementations, navigation may utilize a single camera344. In this illustration, the display386is depicted with cameras344arranged above the display386. The cameras344may operate to provide stereoimages of the physical environment, the user102, and so forth. For example, an image from each of the cameras344above the display386may be accessed and used to generate stereo image data about a face of a user102. This stereoimage data may then be used for facial recognition, user identification, gesture recognition, gaze tracking, and so forth. In other implementations, a single camera344may be present above the display386. The display386may be mounted on a movable mount. The movable mount may allow the display386to move along one or more degrees of freedom. For example, the display386may tilt, rotate as depicted here, and so forth. The size of the display386may vary. In one implementation, the display386may be approximately 8 inches as measured diagonally from one corner to another. An ultrasonic sensor318is also mounted on the front of the robot104and may be used to provide sensor data224that is indicative of objects in front of the robot104. One or more speakers384may be mounted on the robot104. For example, a pair of mid-range speakers384are mounted on the front of the robot104as well as a high range speaker384such as a tweeter. The speakers384may be used to provide audible output such as alerts, music, human speech such as during a communication session with another user102, and so forth. One or more bumper switches314(not shown) may be present along the front of the robot104. For example, a portion of the housing of the robot104that is at the leading edge may be mechanically coupled to one or more bumper switches314. Other output devices124, such as one or more lights382, may be on an exterior of the robot104. For example, a running light may be arranged on a front of the robot104. The running light may provide light for operation of one or more of the cameras344, a visible indicator to the user102that the robot104is in operation, and so forth. One or more of the FOMS316are located on an underside of the robot104. FIG.9is a side view900of the robot104, according to some implementations. The exterior surfaces of the robot104may be designed to minimize injury in the event of an unintended contact between the robot104and a user102or a pet. For example, the various surfaces may be angled, rounded, or otherwise designed to divert or deflect an impact. In some implementations, the housing of the robot104, or a surface coating may comprise an elastomeric material or a pneumatic element. For example, the outer surface of the housing of the robot104may be coated with a viscoelastic foam. In another example, the outer surface of the housing of the robot104may comprise a shape-memory polymer that upon impact deforms but then over time returns to the original shape. In this side view, the left side of the robot104is depicted. An ultrasonic sensor318and an optical sensor320are present on either side of the robot104. The placement of the heavier components of the robot104may be arranged such that a center of gravity (COG)902is located between a wheel axle904of the front wheels802and the caster804.
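The effect of component placement on the COG 902 can be checked with a simple weighted average, as in the sketch below. The masses and positions are made up purely for illustration; this disclosure does not specify the masses or coordinates of the battery 106, the motors 138, or other components.

    def center_of_gravity_x(components):
        """components: list of (mass_kg, x_position_m), measured aft of the wheel axle 904."""
        total_mass = sum(mass for mass, _ in components)
        return sum(mass * x for mass, x in components) / total_mass

    # Hypothetical masses and positions in meters aft of the wheel axle 904.
    parts = [(2.0, -0.05),   # display and mast assembly, slightly ahead of the axle
             (4.0, 0.10),    # battery 106
             (3.0, 0.05),    # motors 138 and drive train
             (5.0, 0.12)]    # structure and modular payload bay 912
    cog_x = center_of_gravity_x(parts)
    caster_x = 0.25          # assumed caster 804 position aft of the axle
    print(0.0 <= cog_x <= caster_x)  # True when the COG lies between the axle and the caster

Arranging the heavier components so that this check holds keeps the COG 902 between the wheel axle 904 and the caster 804, which supports the stability and carrying benefits described next.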
Such placement of the COG902may result in improved stability of the robot104and may also facilitate lifting by the carrying handle. As described above, a wheel rotation sensor358may be used to determine if the caster804wheel or other wheel802is rotating about its axle. In this illustration, the caster804is shown in a trailing configuration, in which the caster804is located behind or aft of the wheel axle904and the COG902. In another implementation (not shown) the caster804may be in front of the axle of the wheels802. For example, the caster804may be a leading caster804positioned forward of the COG902. The robot104may encounter a variety of different floor surfaces and transitions between different floor surfaces during the course of operation. The robot104may include a contoured underbody906that transitions from a first height908at the front of the robot104to a second height910that is proximate to the caster804. This contour provides a ramp effect such that if the robot104encounters an obstacle that is below the first height908, the contoured underbody906helps direct the robot104over the obstacle without lifting the driving wheels802clear from the floor. As a result, the robot104is better able to drive over small obstacles. The robot104may include a modular payload bay912located within the lower structure. The modular payload bay912provides one or more of mechanical or electrical connectivity with the robot104. For example, the modular payload bay912may include one or more engagement features such as slots, cams, ridges, magnets, bolts, and so forth that are used to mechanically secure an accessory within the modular payload bay912. In one implementation, the modular payload bay912may comprise walls within which the accessory may sit. In another implementation, the modular payload bay912may include other mechanical engagement features such as slots into which the accessory may be slid and engage. The modular payload bay912may include one or more electrical connections. For example, the electrical connections may comprise a universal serial bus (USB) connection that allows for the transfer of data, electrical power, and so forth between the robot104and the accessory. In some implementations the accessory may include one or more I/O devices120. The robot104may incorporate a display386that may be utilized to present visual information to the user102. In some implementations, the display386may be located with or affixed to the upper structure. In some implementations, the display386may comprise a touch screen that allows user input to be acquired. The display386may be mounted on a movable mount that allows motion along one or more axes. For example, the movable mount may allow the display386to be tilted, rotated, and so forth. The display386may be moved to provide a desired viewing angle to the user102, to provide output from the robot104, and so forth. For example, the output may comprise the display386being tilted forward and backward to provide a gestural output equivalent to a human nodding their head. The robot104may incorporate a mast914. The mast914provides a location from which additional sensors122or output devices124may be placed at a higher vantage point. The mast914may be fixed or extensible. The extensible mast914is depicted in this illustration. The extensible mast914may be transitioned between a retracted state, an extended state or placed at some intermediate value between the two. At the top of the mast914may be a mast housing916. 
In this illustration, the mast housing916is approximately spherical, however in other implementations other physical form factors such as cylinders, squares, or other shapes may be utilized. The mast housing916may contain one or more sensors122. For example, the sensors122may include a camera344having a field-of-view (FOV). In another example, the sensors122may include an optical sensor320to determine a distance to an object. The optical sensor320may look upward, and may provide information as to whether there is sufficient clearance above the robot104to deploy the mast914. In another example, the mast housing916may include one or more microphones346. One or more output devices124may also be contained by the mast housing916. For example, the output devices124may include a camera flash used to provide illumination for the camera344, and indicator light that provides information indicative of a particular operation of the robot104, and so forth. Other output devices124, such as one or more lights382, may be elsewhere on an exterior of the robot104. For example, a light382may be arranged on a side of the upper structure. In some implementations, one or more of the sensors122, output devices124, or the mast housing916may be movable. For example, the motor138may allow for the mast914, the mast housing916, or a combination thereof to be rotated allowing the FOV to be panned from left to right. The mast914may be configured with one or more safety features. For example, a portion of the mast914at the base of the mast914may be configured to deform or break in the event that a load exceeding a threshold amount is applied to the mast914. In another implementation, the mounting point for the mast914to the upper structure includes one or more breakaway elements, allowing the mast914to break away from the upper structure in the event that a load exceeds the threshold amount. In yet another implementation, the mast914may comprise a flexible structure that bends when a load exceeding a threshold amount is applied to the mast914. In some implementations, the display386may be mounted to the mast914. For example, the display386may be incorporated into the mast housing916. In another example, the display386may be mounted to a portion of the mast914, and so forth. The robot104may occasionally need to be manually transported from one location to another. For example, the robot104may be unable to climb stairs, enter a vehicle, and so forth. To facilitate manual transportation, the robot104may include a carrying handle918. The carrying handle918may be retractable such that, when not in use, the carrying handle918is not accessible. The carrying handle918may retract and deploy via translation, rotation, or extension. For example, the carrying handle918may slide out from the upper structure, or may rotate about a pivot point at one end. In the event the robot104is to be transported, the carrying handle918may be deployed. The carrying handle918may be positioned such that at least a portion of the carrying handle918is over the COG902of the robot104. In another implementation, the carrying handle918may deploy upwards from the lower structure. Deployment of the carrying handle918may include manual operation, such as the user102pressing a handle release button, or may be electronically activated by the robot104using an electrically operated mechanism. 
For example, the electronic activation may involve the robot104generating a command that activates an electrically operated mechanism such as one or more of a motor, solenoid, piezoelectric material, electroactive polymer, shape-memory alloy, and so forth that releases a latch and allows a spring to push the carrying handle918into the deployed position. Continuing the example, the user102may utter a command such as “robot, deploy carrying handle”. Automated speech recognition systems may be used to recognize the utterance and as a result the electrically operated mechanism is activated. In another implementation, the electrically operated mechanism may provide the force that deploys the carrying handle918. Retraction of the carrying handle918may be manual or electronically activated. By utilizing a retractable carrying handle918that may be stowed when not in use, safety of the robot104is improved. For example, the retracted carrying handle918is no longer exposed to be caught on some other object. Additionally, safety may be further improved by including a safety interlock associated with the carrying handle918. The safety interlock may be based on data such as information indicative of the deployment of the carrying handle918, or may be based at least in part on sensor data224. For example, a switch may be used to indicate whether the carrying handle918has been stowed. While the carrying handle918is extended, operation of one or more motors138in the robot104may be inhibited or otherwise prevented from being operated. For example, while the carrying handle918is extended, the motors138used to drive the wheels802may be rendered inoperable such that the robot104may not move. In another example, while the carrying handle918is extended, the mast914may be placed into a retracted position and remain there until the carrying handle918has been stowed. By limiting the motion of the robot104while the carrying handle918is extended for use, the possibility for an adverse interaction between the robot104and the user102is reduced. FIG.10is a back view1000of the robot104, according to some implementations. In this view, as with the front, a first pair of optical sensors320are located along the lower edge of the rear of the robot104, while a second pair of optical sensors320are located along an upper portion of the rear of the robot104. An ultrasonic sensor318provides proximity detection for objects that are behind the robot104. Robot charging contacts1002may be provided on the rear of the robot104. The robot charging contacts1002may comprise electrically conductive components that may be used to provide power from an external source such as a docking station148to the robot104. In other implementations, wireless charging may be utilized. For example, wireless inductive or wireless capacitive charging techniques may be used to provide electrical power to the robot104. In some implementations the wheels802may be electrically conductive wheels1004, that provide an electrical conductive pathway between the robot104and the floor. One or more robot data contacts1006may be arranged along the back of the robot104. The robot data contacts1006may be configured to establish contact with corresponding base data contacts within the docking station148. The robot data contacts1006may provide optical, electrical, or other connections suitable for the transfer of data. Other output devices124, such as one or more lights382, may be on an exterior of the back of the robot104. 
For example, a brake light may be arranged on the back surface of the robot104to provide users102an indication that the robot104is stopping. FIG.11is a top view1100of the robot104, according to some implementations. In some implementations, a microphone346(array) may be emplaced along an upper surface of the upper structure. For example, the microphone346(array) is shown here comprising 8 microphones346, two of which are concealed by the mast housing916. In some implementations, a manual handle release1102may be optionally provided. For example, the manual handle release1102when actuated may result in the carrying handle918being extended. FIG.12is a bottom1200or underside view of the robot104, according to some implementations. In this illustration, a pair of FOMS316are visible and arranged on the underside of the robot104proximate to the front and on the left and right sides, proximate to the wheels802. In another implementation, one or more FOMS316may be arranged along a centerline of the robot104running front and back. One or more optical sensors320may be mounted on the underside proximate to one or more of the front edge or back edge of the robot104. These optical sensors320may be used to detect the presence of a falling edge, such as a stair. For example, the optical sensors320mounted on the front or rear of the robot104may have a field-of-view (FOV) that results in a blind spot close to the robot104. In the event that the user102picks up and moves the robot104, the robot104could be placed into a situation where it is unable to move safely without toppling from a falling edge. As a result, optical sensors320may be mounted at or near the underside of the robot104to provide information about this region. In other implementations, other sensors122may be mounted elsewhere to determine falling edges. For example, the optical sensors320on the front of the robot104may have a FOV that is directed downwards to allow for detection of the falling edge. An output device124, such as one or more undercarriage lights may be provided. For example, a light382may be arranged on an underside of the robot104, proximate to or at a front edge. Also depicted is the caster804. In one implementation, the caster804may be freewheeling, that is free to swivel about. In another implementation, the caster804may be driven, such that a motor138or other actuator may change the direction of the caster804to facilitate steering of the robot104. FIG.13depicts a diagram1300of a docking station148with a secondary RF link interface306, according to some implementations. The docking station148may comprise a base plate1302. A housing1304may include electronics such as a power supply, one or more processors116, one or more communication interfaces, and so forth. The docking station148may obtain power from an electrical plug1306. For example, the electrical plug1306may be plugged into a household electrical outlet. In some implementations, the docking station148may include an uninterruptible power supply or alternative power source such as a fuel cell. A docking beacon1308provides indicia suitable for guiding the robot104into the docking station148at a predetermined location. At that predetermined location, the robot104may engage one or more base charging contacts1310or base data contacts1312. For example, the robot charging contacts1002may come in contact with the corresponding base charging contacts1310while the robot data contacts1006, come into contact with the corresponding base data contacts1312. 
In other implementations, one or more of the base charging contacts1310or the base data contacts1312may be along the base plate1302, or otherwise configured to mate with corresponding robot charging contacts1002or robot data contacts1006located on an underside of the robot104. In other implementations, wireless power transfer may be used to charge the robot104. For example, the base charging contacts1310may be omitted and a wireless inductive or wireless capacitive charging system may be used to provide electrical power to the robot104. The base charging contacts1310may be utilized to provide electrical power to charge the batteries106on board the robot104, supply power to the robot104for operation while docked, and so forth. The robot104or the docking station148may include battery charger circuitry to control charging of the batteries106on board the robot104. The battery charger circuitry may be configured to charge the batteries106at a first rate, second rate, and so forth. For example, the first rate may exhibit a lower amperage than the second rate, and may take longer to charge relative to the second rate. In some implementations, the first rate may be configured to minimize heating of cells within the batteries106on board the robot104, reduce charging damage, and so forth. The base data contacts1312may be used to provide data communication with the robot104while docked. For example, the base data contacts1312may be used to deliver updates to the instructions stored within the memory126of the robot104. The docking station148may include an antenna1314suitable for use with one or more of the network interfaces118. For example, the antenna1314may be used for the secondary RF link interface306. One or more optical beacons1316may be provided on the docking station148to facilitate the robot104locating the docking station148within the physical environment. For example, the optical beacons1316may be placed atop the antenna1314to provide and improve line of sight with the robot104. In some implementations, the robot104may utilize the camera344in the mast housing916to detect the optical beacon1316. For example, the mast914may be extended to increase the height of the mast housing916and the camera344therein. The increased height of the camera344combined with the location of the optical beacons1316atop the antenna1314may provide an improved line of sight for the robot104, facilitating locating of the docking station148by the robot104. The robot104may utilize a primary link1318to establish communication with other devices such as the network144, one or more servers146, the docking station148, other robots104, and so forth. For example, the primary link1318may comprise a WLAN interface302. In some implementations, the primary link1318may be unavailable. For example, a portion of the home may have inadequate Wi-Fi coverage. As described above, the docking station148may include a secondary RF link interface306that establishes a secondary RF link1320with the robot104. In the event that a primary link1318is unavailable, the robot104may maintain communication with the docking station148using the secondary RF link1320. The docking station148may utilize one or more network interfaces118on board the docking station148to establish communication with the network144or other devices. 
Continuing the example above, if the robot104finds itself in an area with inadequate Wi-Fi coverage, it may use the secondary RF link1320to access the docking station148and then access an external server146via the docking station148. In some implementations, the secondary RF link1320may also be used for navigational purposes. For example, secondary RF link1320may be used as a beacon to provide navigational input. In another example, the secondary RF link1320may be used to determine a distance from the docking station148. Continuing this example, the robot104may send a request to the docking station148which then responds within a predetermined time. Based on the value of the predetermined time and the propagation delay associated with transmission, an estimated distance between the docking station148and robot104may be determined. The docking station148may provide other functions as well. For example, the docking station148may include additional resources such as processors116, memory126, and so forth that allow the docking station148to provide various functions such as automated speech recognition, natural language understanding, and so forth. For example, if the robot104is unable to contact an external server146to process speech acquired using a microphone346, the audio data236may be transmitted to the docking station148for local processing by the docking station148. This may provide redundancy and still allow some functionality in the event that a wide area network connection, such as to the Internet, is unavailable. In some implementations, the docking station148may be configured to operate as an edge server to a network accessible resource, such as an external server146. The processes discussed in this disclosure may be implemented in hardware, software, or a combination thereof. In the context of software, the described operations represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more hardware processors116, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. Those having ordinary skill in the art will readily recognize that certain steps or operations illustrated in the figures above may be eliminated, combined, or performed in an alternate order. Any steps or operations may be performed serially or in parallel. Furthermore, the order in which the operations are described is not intended to be construed as a limitation. Embodiments may be provided as a software program or computer program product including a non-transitory computer-readable storage medium having stored thereon instructions (in compressed or uncompressed form) that may be used to program a computer (or other electronic device) to perform the processes or methods described herein. The computer-readable storage medium may be one or more of an electronic storage medium, a magnetic storage medium, an optical storage medium, a quantum storage medium, and so forth. For example, the computer-readable storage media may include, but is not limited to, hard drives, optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable ROMs (EPROMs), electrically erasable programmable ROMs (EEPROMs), flash memory, magnetic or optical cards, solid-state memory devices, or other types of physical media suitable for storing electronic instructions. 
Further embodiments may also be provided as a computer program product including a transitory machine-readable signal (in compressed or uncompressed form). Examples of transitory machine-readable signals, whether modulated using a carrier or unmodulated, include, but are not limited to, signals that a computer system or machine hosting or running a computer program can be configured to access, including signals transferred by one or more networks. For example, the transitory machine-readable signal may comprise transmission of software by the Internet. Separate instances of these programs can be executed on or distributed across any number of separate computer systems. Thus, although certain steps have been described as being performed by certain devices, software programs, processes, or entities, this need not be the case, and a variety of alternative implementations will be understood by those having ordinary skill in the art. The circuitry presented here is provided by way of illustration and not necessarily as a limitation. The selection of particular components, the values of those components, and so forth may be varied to satisfy different operational conditions. For example, resistor values, selection of particular FETs, and so forth may vary to allow use with different size motors138. In some implementations, particular circuits or portions thereof may be replaced with other circuitry of equivalent performance. For example, discrete components may be replaced with an integrated circuit, alternative circuit connections may be used, and so forth. Additionally, those having ordinary skill in the art will readily recognize that the techniques described above can be utilized in a variety of devices, environments, and situations. Although the subject matter has been described in language specific to structural features or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as illustrative forms of implementing the claims.
11858129

MODE FOR INVENTION

Hereinafter, specific embodiments of the present disclosure will be described in detail with reference to the drawings. FIG.1is a perspective view of a mobile robot according to an embodiment of the present disclosure,FIG.2is a perspective view of a service module mounted to a mobile robot according to an embodiment of the present disclosure, andFIG.3is an exploded perspective view of a mobile robot according to an embodiment of the present disclosure. A mobile robot1according to the embodiment of the present disclosure may include a body100, a driving unit240, a module support plate400, a display unit500and600, and a rotation mechanism700. The body100may constitute the body portion of the mobile robot1. A length of the body100in the front-rear direction may be longer than a width of the body100in the left-right direction. As an example, the cross-section of the body100in a horizontal direction may have an approximately elliptical shape. The body100may include an inner module200and a housing300surrounding the inner module200. The inner module200may be positioned inside the housing300. The driving unit240may be provided in a lower portion of the inner module200. The inner module200may include multiple plates and multiple frames. In more detail, the inner module200may include a lower plate210, an upper plate220positioned above the lower plate210, and a top plate230positioned above the upper plate220. In addition, the inner module200may further include a plurality of lower supporting frames250and a plurality of upper supporting frames260. The lower plate210may form a bottom surface of the body100. The lower plate210may be referred to as a base plate. The lower plate210may be horizontal. The lower plate210may be provided with the driving unit240. The upper plate220may be spaced apart upward from the lower plate210. The upper plate220may be referred to as a middle plate. The upper plate220may be horizontal. The upper plate220may be positioned between the lower plate210and the top plate230in the vertical direction. The lower supporting frame250may be positioned between the lower plate210and the upper plate220. The lower supporting frame250may extend vertically. The lower supporting frame250may support the upper plate220from the lower side. The top plate230may form a top surface of the body100. The top plate230may be spaced upward from the upper plate220. The upper supporting frame260may be positioned between the upper plate220and the top plate230. The upper supporting frame260may extend vertically. The upper supporting frame260may support the top plate230from the lower side. The housing300may form an outer peripheral surface of the main body100. A space in which the inner module200is disposed may be formed inside the housing300. The top and bottom surfaces of the housing300may be opened. The housing300may surround the edges of the lower plate210, the upper plate220, and the top plate230. In this case, an inner periphery of the housing300may be in contact with the edges of the lower plate210, the upper plate220, and the top plate230, but is not limited thereto. A front open portion OP1may be formed in a front portion of the housing300. The front open portion OP1may be opened toward the front. The front open portion OP1may extend along the peripheral direction of the housing300.
A front lidar275A may detect an obstacle or the like positioned in front of the mobile robot1through the front open portion OP1or perform mapping for a front region of the mobile robot1. A rear open portion OP2may be formed in a rear portion of the housing300. The rear open portion OP2may be opened toward the rear. The rear open portion OP2may extend along the peripheral direction of the housing300. The rear lidar275B (seeFIG.4) may detect an obstacle or the like positioned behind the mobile robot1through the rear open portion OP2or perform mapping for a rear region of the mobile robot1. In addition, a back cliff sensor276B (seeFIG.4) may detect a state of a floor surface behind the mobile robot1through the rear open portion OP2. An upper open portion OP3may be formed in the front portion of the housing300. The upper open portion OP3may be formed above the front open portion OP1. The upper open portion OP3may be opened toward the front side or a front lower side. The cliff sensor276A may detect the state of the floor surface in front of the mobile robot1through the upper open portion OP3. A plurality of openings303A may be formed in the housing300. In more detail, the opening303A may be formed in the top portion of the housing300. The plurality of openings303A may be spaced apart from each other along the peripheral direction of the housing300. Each ultrasonic sensor310may detect an object around the mobile robot1through the opening303A. The housing300may include a material having a first thermal conductivity, and the inner module200may include a material having a second thermal conductivity higher than the first thermal conductivity. In more detail, at least one of the lower plate210, the upper plate220, the top plate230, the lower supporting frame250and the upper supporting frame260may include a material having a second thermal conductivity higher than the first thermal conductivity. As an example, the housing300may include an injection plastic material, and at least one of the lower plate210, the upper plate220, the top plate230, the lower supporting frame250and the upper supporting frame260may include a metal material such as aluminum. Accordingly, heat from a heat dissipation part disposed in the inner module200may be smoothly dissipated by conduction while preventing the housing300forming the appearance of the body100from becoming hot. The driving unit240may enable the mobile robot1to move. The driving unit240may be provided below the body100. In more detail, the driving unit240may be provided in the lower plate210. On the other hand, the module support plate400may be mounted on the top surface of the body100. The module support plate400is preferably a horizontal plate shape, but is not limited thereto. Like the body100, the module support plate400may extend such that a length in the front-rear direction is longer than a width in the left-right direction. The module support plate400may support a service module M from the lower side. That is, the service module M may be seated and supported on the module support plate400. The service module M may be detachably mounted to the module support plate400. In this case, the mobile robot1of the present disclosure may be referred to as a "mobile module", and the entire configuration including the mobile module1and the service module M may also be referred to as a "mobile robot". However, to avoid confusion in the description, these names should not be used below.
The service module M may be a transport object carried by the mobile robot1, and its type is not limited. Therefore, there is an advantage that it is possible to mount and use different service modules M to the same mobile robot1. As an example, the service module M may be a cart capable of receiving items. In this case, the mobile robot1equipped with a cart may be used in a mart, and a user has an advantage of not having to push the cart directly. The top surface of the body100, that is, the top plate230may be provided with at least one of at least one module guide231configured to guide the installation position of the service module M and at least one module fastening portion232fastened to the service module M. The module guide231and the module fastening portion232may protrude upward from the top plate230. The module guide231may pass through a sub-through hole411formed in the module support plate400, and prevent the service module M from shaking in the horizontal direction while guiding the installation position of the service module M. The module fastening portion232may pass through the sub-opening hole412formed in the module support plate400and be fastened to the service module M. Therefore, the service module M may be firmly mounted to the upper side of the module support plate400. The module guide231and the module fastening portion232may also be used as handles when carrying the mobile robot1. Meanwhile, the display unit500and600may be positioned above the front portion of the main body100. The display units500and600may be disposed to extend vertically. A height HD of the display unit500and600(seeFIG.4) may be higher than a height HB of the body100. In more detail, the display unit500and600may include a body display unit500and a head display unit600. The body display unit500may be integrally formed with the module support plate400. In this case, the body display unit500may be formed to extend upward from the front end of the module support plate400. However, it is of course possible that the body display unit500and the module support plate400are formed of separate members. A height of the body display unit500may be higher than a height of the body100. The body display unit500may include a body display540provided on a front surface thereof. The body display540may function as an output unit on which an image or video is displayed. At the same time, the body display540may include a touch screen to function as an input unit capable of enabling touch input. The body display unit500may be positioned in front of the service module M mounted on the module support plate400. In this case, a groove corresponding to a shape of the body display unit500may be formed in the front portion of the service module M, and the body display unit500may be fitted into the groove. That is, the body display unit500may guide a mounting position of the service module M. The head display unit600may be positioned above the body display unit500. The head display unit600may be rotatably connected to an upper portion of the body display unit500. In more detail, the head display unit600may include a neck housing620rotatably connected to the body display unit500. The rotation mechanism700may rotate the head display unit600through the interior of the neck housing620. The head display unit600may include a head display640provided on a front surface thereof. The head display600may face the front side or a front upper side. The head display640may display an image or video depicting a human expression. 
Accordingly, the user may feel that the head display unit600is similar to a human head. The head display unit600may rotate a certain range (for example, 180 degrees) left and right with respect to the vertical axis of rotation, like a human head. The rotation mechanism700may rotate the head display unit600with respect to the body display unit500. The rotation mechanism700may include a rotating motor and a rotating shaft rotated by the rotating motor. The rotating motor may be disposed inside the body display unit500, and the rotating shaft may extend from the interior of the body display unit500into the neck housing620and be connected to the head display unit600. FIG.4is a cross-sectional view taken along line A-A′ inFIG.1. A battery271and a control box272may be embedded in the body100. Further, the body100may include a front lidar275A and a rear lidar275B embedded therein. Electric power for the operation of the mobile robot1may be stored in the battery271. The battery271may be supported by the upper plate220of the inner module200. The battery271may be disposed between the upper plate220and the top plate230. The battery271may be disposed eccentrically from the interior of the body100to the rear. Also, the display unit500and600may be supported by the top plate230of the inner module200. The display unit500and600may be disposed above the front portion of the top plate230. The body display unit500may not overlap the battery271in the vertical direction. With the above configuration, the load of the battery271and the load of the body display unit500and the head display unit600may be balanced. Thereby, it is possible to prevent the mobile robot1from being tilted or overturned back and forth. The control box272may be disposed in front of the battery271. The control box272may be supported by the upper plate220of the inner module200. The control box272may be disposed between the upper plate220and the top plate230. At least a portion of the control box272may overlap the display unit500and600vertically. The control box272may include a box-shaped boxing case and a controller provided in the boxing case. A plurality of through holes may be formed in the boxing case to dissipate internal heat of the control box272. The controller may include a printed circuit board, and may control the overall operation of the mobile robot1. Since the control box272is positioned in front of the battery271, the load of the battery271eccentric to the rear and the load of the control box272may be balanced. Thereby, it is possible to prevent the mobile robot1from being tilted or overturned back and forth. The front lidar275A and the rear lidar275B may be provided in the front and rear portions of the body100, respectively. LIDAR is a sensor capable of detecting a distance and various properties of an object by radiating a laser beam to a target, and the front lidar275A and the rear lidar275B may detect surrounding objects, terrain features, and the like. A controller of the control box272may receive information detected by the front lidar275A and the rear lidar275B, and perform 3D mapping or control the driving unit240to avoid an obstacle based on the information. As described above, the front lidar275A may detect information on a front region of the mobile robot1through the front open portion OP1formed in a front portion of the body100. The rear lidar275B may detect information on a rear region of the mobile robot1through the rear open portion OP2formed in a rear portion of the body100.
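The patent describes the obstacle-avoidance behavior only at the level of the controller receiving front and rear lidar information and commanding the driving unit240; it does not disclose the software. The following Python sketch is a minimal, hypothetical illustration of that decision step, assuming range readings in meters and assumed names (avoid_obstacles, min_clearance_m, the speed values); it is not the controller's actual implementation.

def avoid_obstacles(front_ranges_m, rear_ranges_m, min_clearance_m=0.5):
    """Pick a simple drive command from front and rear lidar range readings (meters)."""
    nearest_front = min(front_ranges_m) if front_ranges_m else float("inf")
    nearest_rear = min(rear_ranges_m) if rear_ranges_m else float("inf")
    if nearest_front < min_clearance_m and nearest_rear < min_clearance_m:
        return {"linear_mps": 0.0, "angular_rps": 0.0}   # boxed in: stop and wait
    if nearest_front < min_clearance_m:
        return {"linear_mps": -0.1, "angular_rps": 0.4}  # obstacle ahead: back off and turn
    return {"linear_mps": 0.3, "angular_rps": 0.0}       # path clear: continue forward

# Example: an object 0.3 m ahead while the rear is clear triggers the back-off command.
command = avoid_obstacles([0.3, 1.2, 2.5], [1.8, 2.0])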
At least a portion of the front lidar275A may be positioned below the control box272. The front lidar275A and the rear lidar275B may be positioned at the same height inside the body100. In more detail, a vertical distance H1from the bottom surface of the body100to the front lidar275A may be equal to a vertical distance H2from the bottom surface of the body100to the rear lidar275B. In addition, the front lidar275A and the rear lidar275B may be disposed inside the body100at a lower position than the battery271. The front lidar275A and the rear lidar275B may be supported by the lower plate210of the inner module200. The front lidar275A and the rear lidar275B may be disposed between the lower plate210and the upper plate220. In more detail, a vertical distance H3from the bottom surface of the body100to the battery271may be greater than the vertical distance H1from the bottom side of the body100to the front lidar275A. In addition, the vertical distance H3from the bottom surface of the main body100to the battery271may be greater than the vertical distance H2from the bottom surface of the body100to the rear lidar275B. As a result, a space inside the body100may be effectively utilized as compared with a case where the front lidar275A and the rear lidar275B are disposed at the same height as the battery271. Therefore, the size of the body100may be made compact. A cliff sensor276A and a back cliff sensor276B may be embedded in the body100. The cliff sensor276A and the back cliff sensor276B may be supported by being suspended from the top plate230of the inner module200. The cliff sensor276A and the back cliff sensor276B may be disposed between the upper plate220and the top plate230. The cliff sensor may detect a state of the floor surface and the presence or absence of a cliff by transmitting and receiving infrared rays. That is, the cliff sensor276A and the back cliff sensor276B may detect the state of the floor surface of the front and rear regions of the mobile robot1and the presence or absence of a cliff. The controller of the control box272may receive information detected by the cliff sensor276A and the back cliff sensor276B, and control the driving unit240such that the mobile robot1avoids cliffs based on the information. As described above, the cliff sensor276A may detect the state of the floor surface in front of the mobile robot1through the upper open portion OP3. The back cliff sensor276B may detect the state of the floor surface behind the mobile robot1through the rear open portion OP2. The cliff sensor276A may be disposed above the front lidar275A. The back cliff sensor276B may be disposed above the rear lidar276B. At least a portion of the cliff sensor276A may be positioned above the control box272. The back cliff sensor276B may be positioned behind the battery271. That is, the cliff sensor276A may be disposed within the body100at a higher position than the back cliff sensor276B. In more detail, a vertical distance H4from the bottom surface of the body100to the cliff sensor276A may be greater than a vertical distance H5from the bottom surface of the body100to the back cliff sensor276B. As a result, a space inside the body100may be efficiently utilized as compared with a case where the cliff sensor276A is positioned in front of the control box272. Therefore, the body100may be compact with respect to the front-rear direction. Meanwhile, a wiring disconnect switch277may be embedded in the body100. 
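The cliff sensors276A and276B are described as infrared sensors that report the state of the floor so that the controller can keep the driving unit240away from drops. As a rough illustration of that check only, the sketch below assumes the sensor output can be treated as a measured distance to the floor and uses hypothetical names and threshold values; none of these come from the patent.

def is_cliff(ir_floor_distance_m, nominal_floor_distance_m=0.12, drop_threshold_m=0.05):
    """Assume a drop when the measured floor distance exceeds the nominal value by the threshold."""
    return ir_floor_distance_m > nominal_floor_distance_m + drop_threshold_m

def cliff_guard(front_reading_m, rear_reading_m, requested_linear_mps):
    """Suppress motion toward a detected drop; otherwise pass the requested speed through."""
    if requested_linear_mps > 0.0 and is_cliff(front_reading_m):
        return 0.0   # cliff ahead: do not drive forward
    if requested_linear_mps < 0.0 and is_cliff(rear_reading_m):
        return 0.0   # cliff behind: do not reverse
    return requested_linear_mps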
The wiring disconnect switch277may cut off the power of the mobile robot1to immediately stop driving of the mobile robot1. The wire disconnect switch277may be positioned behind the front lidar275A. The wire blocking switch277may be supported by the lower plate210of the inner module200. FIG.5is a perspective view of a display unit and a module support plate according to an embodiment of the present disclosure,FIG.6is a perspective view of a display unit and a module support plate according to an embodiment of the present disclosure as viewed from different directions,FIG.7is a view illustrating a body display unit and a head display unit which are separated from each other, according to an embodiment of the present disclosure, andFIG.8is a bottom view of a display unit and a module support plate according to an embodiment of the present disclosure. As described above, the display unit500and600may include a body display unit500extending vertically and a head display unit600rotatably connected to an upper portion of the body display unit500. A first cover open portion OP4and a second cover open portion OP5may be formed in a front surface of the body display unit500. The first cover open portion OP4may be opened toward the front. A depth camera851(seeFIG.10) may detect a distance between a person and an obstacle positioned in front of the mobile robot through the first cover open portion OP4. In addition, the depth camera851may perform a face recognition camera function of recognizing a face of a person positioned in front of the mobile robot1. The second cover open portion OP5may be formed on the lower side of the first cover open portion OP4. The second cover open portion OP5may be opened toward a front lower side. The upper cliff sensor852(seeFIG.10) may detect a state of the floor surface in front of the mobile robot1through the second cover open portion OP5. In this case, the upper cliff sensor852may detect a state of the floor surface positioned in front of the mobile robot1in a wider range than the cliff sensor276A described above. On the other hand, the cliff sensor276A may detect the state of the floor surface in front of the mobile robot1more precisely than the upper cliff sensor852. A sound hole801may be formed in the body display unit500. The sound of a speaker (seeFIG.12) positioned inside the body display unit500may be emitted to the outside of the mobile robot1through the sound hole801. The sound hole801may be formed in the top surface of the body display unit500. The sound hole801may be formed on the left or right side of a neck insertion opening500A into which the neck housing620of the head display unit600is inserted. Accordingly, the body display unit500may be formed to be compact in the front-rear direction as compared to a case where the sound hole801is formed in front or rear of the neck insertion opening500A. A neck insertion opening500A may be formed in the upper portion of the body display unit500. The neck insertion opening500A may be formed by vertically penetrating the top surface of the body display unit500. The neck housing620of the head display unit600may be inserted into the neck insertion opening500. In addition, the upper portion of the rotation mechanism700may protrude upward from the neck insertion port500and may be introduced into the neck housing620. A lower opening portion500B may be formed in a lower portion of the body display unit500. The lower opening portion500B may be formed by opening a bottom surface of the body display unit500. 
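The description distinguishes a wide-range but coarser upper cliff sensor852from the more precise cliff sensor276A. One plausible way to combine such a pair, shown purely as an assumption and not as the disclosed control scheme, is to treat the wide-range reading as an early warning and the precise reading as the final stop condition:

def floor_safety_state(wide_range_drop_detected, precise_drop_detected):
    """Combine a coarse wide-range floor check with a precise short-range check."""
    if precise_drop_detected:
        return "stop"        # precise sensor confirms a drop directly ahead
    if wide_range_drop_detected:
        return "slow_down"   # early warning only: reduce speed and keep checking
    return "normal"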
An electric wire or harness connected to the body100may be connected to the interior of the body display unit500through the lower opening portion500B. In addition, the supporter810(seeFIG.11) disposed vertically inside the body display unit500may pass through the lower opening portion500B and be supported by the body100. A rear opening portion530A may be formed in a rear surface of the body display unit500. In more detail, the rear opening portion530A may be formed in a lower rear surface of the body display unit500. The electric wire or harness connected to the service module M (seeFIG.2) may be connected to the interior of the body display unit500through the rear opening portion530A. The rear opening portion530A may be opened and closed by a shutter550. A handle550A may be formed on the shutter550. The handle550A may be formed to protrude rearward from the lower rear surface of the shutter550. An operator may hold the handle550A and push the shutter550upward to open the rear opening530and connect the electric wire or harness connected to the service module M into the rear opening portion530A. Thereafter, the operator may mount the service module M on the module support plate400. Conversely, the operator may separate the service module M from the module support plate400and the electric wire or harness connected to the service module M from the service module M. Thereafter, the operator may hold the handle550A and push the shutter550downward to block the rear opening portion530A. The body display unit500may include a body housing510, a front cover520, a rear cover530, and a body display540. The body housing510may form the appearance of the body display unit500. The body housing510may be formed with an interior space in which a plurality of parts including a body display540are accommodated. At least a portion of the rear surface of the body housing510may be opened, and the rear cover530may cover the open rear surface of the body housing510. Therefore, the operator may easily access the interior space by opening the rear cover530. The plate mounting portion400A on which the module support plate400is mounted may be connected to the body display unit500. In one example, the body housing510may be integrally formed with the plate mounting portion400A. However, it is not limited thereto. The module support mounting portion400A may have a ring shape corresponding to the shape of the module support plate400. The body housing510may be formed to extend upward from the front end of the plate mounting portion400A. The front cover520may cover the body housing510and the body display540from the front. In more detail, the front cover520may cover the front and top surfaces of the body housing510. The front cover520may include a transparent material. The front cover520may function as a window of the body display540. A first cover open portion OP4and a second cover open portion OP5may be formed in the front surface of the front cover520. A sound hole801may be formed in a top surface of the front cover520. The rear cover530may cover the open rear surface of the body housing510. A rear opening portion530A which is to be opened and closed by the shutter550may be formed on the lower side of the rear cover530. In addition, the lower portion of the rear cover530may form a lower opening portion500B together with the lower portion of the body housing510. The body display540may display an image or a video toward the front. The body display540may be protected by the front cover520. 
In addition, the body display540may function as an input unit including a touch screen to enable touch input. Meanwhile, the head display unit600may be rotatably connected to the upper portion of the body display unit500. In more detail, the head display unit600may include a head housing610, a neck housing620, and a head display640. The head housing610may form the appearance of the head display unit600. The head housing610may have a generally disc shape, but is not limited thereto. The head housing610may be spaced apart upward from the body display unit500. The head housing610may include a front surface facing a front upper side and a rear surface facing a rear lower side. The front surface of the head housing610may include a flat surface, and may be covered by a glass cover including a transparent material. The rear surface of the head housing610may include a curved surface that is continuous with the outer surface of the neck housing620. The head housing610may rotate together with the neck housing620. The neck housing620may be referred to as a neck. The neck housing may have an approximately vertical hollow cylinder shape. That is, a hollow620A through which the rotation mechanism700passes may be formed in the interior of the neck housing620. The neck housing620may be inserted into and rotatably connected to the neck insertion opening500A formed in the upper portion of the body display unit500. The upper end of the neck housing620may be connected to the back surface of the head housing610. Since the rear surface of the head housing610faces the rear lower side, the upper end of the neck housing620may be formed to be inclined in a direction in which a height increases toward the rear. The neck housing620may be formed smaller than the head housing610. As a result, the appearance of the mobile robot1is similar to that of the person, and the user may feel the familiarity. The head display640may be provided on the front surface of the head housing610. The head display640may be covered by a glass cover covering the front surface of the head housing. The head display640may face a front upper side. The head display640may display an image or a video toward the upper front side. In addition, the head display640may function as an input unit capable of touch input by including a touch screen. The size of the head display640may be smaller than the size of the body display540. That is, the body display540may function as a main display, and the head display640may function as a secondary display. FIG.9is an exploded perspective view of a display unit and a module support plate according to an embodiment of the present disclosure,FIG.10is a cross-sectional view taken along line C-C′ inFIG.5, andFIG.11is a view showing the interior of a display unit according to an embodiment of the present disclosure. An interface module560may be disposed inside the body display unit500. The interface module560may be disposed inside the body display unit500at a lower position than the body display540. In more detail, a vertical distance from the body100to the body display540may be higher than a vertical distance from the body100to the interface module560. The interface module560may include a box-shaped module case560A and an interface controller560B (seeFIG.11) embedded in the module case560A. At least one fastening bracket561fastened to the supporter810to be described later may be formed in the module case560A. The fastening bracket561may be fastened to the supporter810at the rear side of the supporter810. 
That is, the interface module560may be supported by the supporter810. The interface controller560B may include an interface printed circuit board. The interface controller560B may control a number of parts included or disposed in the display unit500and600. In one example, the interface controller560B may control a video and an image displayed on the body display540and the head display640. Also, the interface controller560B may process a command input through a touch display included in at least one of the body display540and the head display640. In addition, the interface controller560B may control the rotation mechanism700. In addition, the interface controller560B may control an audio unit580. The configuration controllable by the interface controller560B is not limited thereto, and may be added, deleted, or changed. A hub unit570may be disposed inside the body display unit500. The hub unit570may mediate connections between electronic components included in the mobile robot1. The hub unit570may be formed with a plurality of slots to which connection terminals of an electric wire or harness are connected. In one example, the control box272(seeFIG.4) and the interface module560may be connected to the hub unit570by electric wires or harnesses. In addition, at least one of the body display, the head display, the rotating motor710, the depth camera851, the upper cliff sensor852, the audio unit580, and the speaker800may be connected to the hub unit570. Accordingly, electrical connection between each electronic component may be facilitated, and arrangement of electric wires or harnesses in the body display unit500may be simplified. The hub unit570may be positioned behind the body display540and may be positioned above the interface module560. The hub unit570may be supported by the supporter810. In more detail, at least one fastening bracket571fastened to the supporter810may be formed in the hub unit570. The fastening bracket571may be fastened to the supporter810at the rear side of the supporter810. The audio unit580may be disposed inside the body display unit500. The audio unit580is electrically connected to the speaker800to emit sound to the speaker800. The audio unit580may be positioned under the body display540and may be positioned in front of the shutter guide590to be described later. The audio unit580may be supported by the supporter810. In more detail, the audio unit580may be provided with an audio bracket581fastened to the supporter810. The audio bracket581may be fastened to the supporter810in front of the supporter810. The shutter550may be raised to open the rear opening portion530A or lowered to close the rear opening portion530A. In this case, a shutter guide590for guiding the opening and closing operation of the shutter550may be disposed inside the body display unit500. The shutter guide590may be positioned in a lower portion of the body display unit500. The shutter guide590may be positioned behind the audio unit580. The shutter guide590may be extending vertically. A pair of shutter guides590may be provided to be spaced apart from each other left and right. One of the shutter guides590may guide a left side portion of the shutter550, and the other of the shutter guides590may guide a right side portion of the shutter550. In more detail, protrusions may be formed to protrude outward from the left and right sides of the shutter550. 
During the opening and closing operation of the shutter550, the protrusion may be moved along the guide groove590A formed in the inner surface of each shutter guide590. In this case, the guide groove590A may include a vertical portion formed to be vertical and an inclined portion connected to the lower end of the vertical portion and inclined backward. Therefore, when the shutter550is opened, the shutter550is first moved to the rear and is then raised, so that the shutter550may not interfere with the rear cover530. The shutter guide590may be supported by the supporter810. In more detail, at least one fastening bracket591fastened to the supporter810may be formed in each shutter guide590. The fastening bracket591may be fastened to the supporter810in front of the supporter810. Meanwhile, a display mounting hole511in which the body display540is mounted may be formed in the body housing510. The display mounting hole511may be formed by opening a part of the front surface of the body housing510. The front surface of the body display540mounted in the display mounting hole511may be covered by the front cover520. In addition, a sensing module mounting hole512in which a sensing module850is mounted may be formed in the body housing510. The sensing module mounting hole512may be formed by opening a part of the front surface of the body housing510. The sensing module mounting hole512may be disposed above the display mounting hole511. The sensing module850may be disposed above the body display540. The sensing module850may include a depth camera851and an upper cliff sensor852. The depth camera851may be disposed above the upper cliff sensor852. Since the functions of the depth camera851and the upper cliff sensor852have been described above, a duplicate description is omitted. When the sensing module850is mounted in the sensing module mounting hole512, the depth camera851may perform sensing on the front region of the mobile robot1through the first cover open portion OP4formed in the front cover520. In addition, the upper cliff sensor852may perform sensing on the front lower side of the mobile robot1through the second cover open portion OP5formed in the front cover520. In addition, the body housing510may be formed with a sound outlet513communicating with the sound hole801(seeFIG.5). In more detail, the sound outlet513may be formed to pass through the upper surface of the body housing510. A pair of sound outlets513may be formed on the left and right sides of the neck insertion opening500A. In addition, a first insertion opening510A may be formed in the top surface of the body housing510. The first insertion opening510A may form a neck insertion opening500A together with a second insertion opening520A formed in the top surface of the front cover520. In addition, the body housing510may be provided with at least one inner bracket519(seeFIG.11) for fastening the supporter810to the inner surface of the body display unit500. In more detail, the inner bracket519may fasten the inner surface of the body housing510to the supporter810. Thus, the load applied to the body housing510may be distributed to the supporter810, and the body housing510may be reinforced. Meanwhile, the rotation mechanism700may be disposed inside the display unit500and600. The rotation mechanism700may rotate the head display unit600with respect to the body display unit500. The rotation mechanism700may include a rotating motor710that is disposed inside the body display unit500and a rotating shaft720that is rotated by the rotating motor710.
The rotation mechanism700may further include a head fastener740fastened to the head display unit600and a motor mounter750on which the rotating motor710is mounted. The rotating motor710may be positioned behind the sensing module850and may be positioned above the hub unit570. The rotating motor710may be supported by the supporter810. In more detail, the rotating motor710may be mounted to the motor mounter750, and the motor mounter750may be supported by the supporter810. In more detail, the motor mounter750may be provided with a fastening bracket (not shown) fastened to the supporter810. However, the present disclosure is not limited thereto, and it is also possible that the motor mounter750is directly fastened to the supporter810. The rotating motor710and the motor mounter750may be disposed below the neck housing620. The rotating shaft720is connected to the rotating motor710to rotate. The rotating shaft720may extend upward from the rotating motor710and pass through a hollow620A formed in the interior of the neck housing620. As a result, the neck housing620may be formed to be thinner as compared to a case where the rotating motor710is disposed inside the neck housing620. In addition, since the rotating motor710is not disposed inside the neck housing620, electric wires or harnesses may easily pass through the interior of the neck housing620. The head fastening portion740may rotate together with the rotating shaft720. Since the head fastening portion740is fastened to the head display unit600, the head display unit600may rotate together with the head fastening portion740and the rotating shaft720. In more detail, the head fastening portion740may be fastened to the rear portion of the head display640. However, the present disclosure is not limited thereto, and the head fastening portion740may be directly fastened to the head housing610or may be fastened to a separate bracket (not shown) provided in the head housing610. A more detailed configuration of the rotation mechanism700will be described later in detail. Meanwhile, the mobile robot1may include the speaker800disposed inside the body display unit500. In more detail, the speaker800may include a speaker unit electrically connected to an audio580to emit sound, and an enclosure surrounding the speaker unit. The speaker800may be disposed inside the upper portion of the body display unit500. In more detail, a vertical distance between the speaker800and the top surface of the body display unit500may be less than a vertical distance between the speaker800and the body100(seeFIG.1). The speaker800may be positioned on the upper side than the supporter810. In more detail, the vertical distance from the top surface of the body display unit500to the supporter810may be greater than the vertical distance from the top surface of the body display unit500to the bottom surface of the speaker800. The speaker800may face the sound hole801(seeFIG.5). In more detail, the speaker800may be disposed toward the sound hole801from the lower side of the sound outlet513. As a result, the sound of the speaker800may be smoothly emitted to the sound hole801. A pair of speakers800may be provided on the left side and the right side to be spaced apart from each other. The pair of speakers800A and800B may be referred to as a first speaker800A and a second speaker800B, respectively. The speaker800may be spaced apart from the rotation mechanism700, more specifically, the rotating motor710and the motor mounter750. 
In addition, the speaker800may be disposed at a position higher than the rotating motor710in the body display unit500. In more detail, a vertical distance n2from the top surface of the body display unit500to the rotating motor710is greater than a vertical distance n1from the top surface of the body display unit500to the bottom surface of the speaker800. With the above configuration, it is possible to minimize the adverse effect of rotation and vibration of the rotating motor710on the speaker800. Therefore, it is possible to provide an improved sound experience to a user. The speaker800may overlap the rotating shaft720in the horizontal direction. In more detail, the rotating shaft720may pass between the pair of speakers800A and800B. That is, the first speaker800A and the second speaker800B may be positioned on opposite sides of each other with respect to the rotating shaft720. In addition, the speaker800may overlap the supporting shaft770(seeFIG.12), which will be described later, in the horizontal direction. In addition, the speaker800may be positioned on the side of the neck housing620. At least a portion of the speaker800may overlap the neck housing620in the horizontal direction. In more detail, at least a portion of the speaker800may overlap a portion inserted into the body display unit500through the neck insertion opening500A, in a horizontal direction. Accordingly, it is possible to minimize the adverse effect of the rotation and vibration of the rotating shaft720on the speaker800. The speaker800may be supported by the body housing510and may be spaced apart from the supporter810. For example, a speaker mounting portion (not shown) on which the speaker800is mounted may be formed in the interior of the body housing510. Accordingly, vibration of the rotating motor710transferred to the supporter810through the motor mounter750may not be transferred to the speaker800. Therefore, it is possible to prevent the vibration of the rotating motor710and the vibration of the speaker800from causing resonance. Meanwhile, the supporter810may be extending vertically inside the body display unit500. In more detail, the supporter810may be extending vertically inside the body housing510. The supporter810may be a vertical frame including a metal material. The lower end of the supporter810may be supported by the body100(seeFIG.3). In more detail, the lower end of the supporter810may be supported by the top plate230of the body100. That is, the supporter810may extend upward from the body100and may be positioned inside the body display unit500by passing through the lower opening portion500B. A plurality of supporters810may be provided to be spaced apart from each other left and right. In one example, a pair of supporters810may be provided. The pair of supporters810A and810B may be referred to as a first supporter810A and a second supporter810B, respectively. The supporter810may support at least one of the interface module560, the hub unit570, the audio unit580, and the shutter guide590. The supporter810may be fastened to the motor mounter750to support the rotation mechanism700. In addition, the supporter810may support the load of the head display unit600connected to the rotation mechanism700. Therefore, since the above structures are not supported by the body housing510, the body housing510may be formed to be thin and compact. The supporter810may be fastened to the body housing510through the inner bracket519to reinforce the body housing510. 
In addition, the load of the body display540, the speaker800, and the sensing module850supported by the body housing510may be distributed to the supporter810. FIG.12is a view showing a rotation mechanism according to an embodiment of the present disclosure,FIG.13is an exploded perspective view of a rotation mechanism according to an embodiment of the present disclosure, andFIG.14is a view showing a bottom surface of a shaft connecting body according to an embodiment of the present disclosure. InFIG.12, the body housing510, the rear cover520, and the neck housing620have been removed. The rotation mechanism700may include a rotating motor710, a rotating shaft720, a shaft connecting body730, and a head fastening portion740. The rotation mechanism700may further include a motor mounter750, a shaft support portion760, and a supporting shaft770. As described above, the rotating motor710may be disposed inside the body display unit500. The rotating motor710may be positioned behind the sensing module850and may be disposed at a lower position than the speaker800. The rotating motor710may be spaced apart from the speaker800. The rotating motor710may be mounted on the motor mounter750. The rotating shaft720may be rotated by the rotating motor710. The rotating shaft720may be extending vertically. The lower end of the rotating shaft720may be connected to the rotating motor710, and the upper end thereof may be connected to the shaft connecting body730. The upper end and the lower end of the rotating shaft720may be rotatably supported by bearings724and725, respectively. Accordingly, the rotating shaft720may be rotated smoothly without being shaken in the horizontal direction. In more detail, the rotating shaft720may include a shaft721, a lower connection portion722formed at the lower end of the shaft721, and an upper connection portion723formed at the upper end of the shaft721. The shaft721may be extending vertically. The lower connection portion722may be connected to the rotating motor710. The diameter of the lower connection portion722may be larger than the diameter of the shaft721. The lower bearing752may rotatably support the lower connection portion722in the horizontal direction. A lower bearing mounting portion751on which the lower bearing752is mounted may be formed in the motor mounter750. The upper connection portion723may be connected to the shaft connecting body730. The diameter of the upper connection portion723may be smaller than the diameter of the shaft721. The upper bearing764of the shaft support portion760may rotatably support the upper connection portion723in the horizontal direction. The shaft connecting body730may be connected to the upper connection portion723of the rotating shaft720. A fitting hole into which the upper connection portion723is fitted may be formed in the shaft connecting body730. The shaft connecting body730may be rotated together with the shaft720. A head fastening portion740fastened to the head display unit600may be coupled to a top surface of the shaft connecting body730. Therefore, the head fastening portion740may be rotated together with the rotating shaft720and the shaft connecting body730. A locking protrusion733may be formed on the shaft connecting body730. The locking protrusion733may be caught by a limiter763, which will be described later, to limit a rotation range of the head display unit600. In more detail, the shaft connecting body730may include a panel portion731and a disc portion732formed under the panel portion731. 
The panel portion731may have a generally rectangular plate shape, but is not limited thereto. The head fastening portion740may be fastened to the top surface of the panel portion731. The disc portion732may be formed smaller than the panel portion731. The disc portion732may be formed to protrude downward from the bottom surface of the panel portion731. The disc portion732may be integrally formed with the panel portion731, but is not limited thereto. A locking protrusion733may be formed in the disc portion732. The locking protrusion733may be formed to protrude in the radially outward direction from an outer periphery of the disc portion732. Meanwhile, the shaft support portion760may rotatably support the rotating shaft720in the horizontal direction. The shaft support portion760may be positioned below the shaft connecting body730. The shaft support portion760may be a fixed structure which is not rotated. In addition, a limiter763may be formed to protrude upward from the shaft support portion760. The locking protrusion733formed on the shaft connecting body730may be caught by the limiter763so that the rotation range is limited. Accordingly, the rotation range of the head display unit600may be easily limited, and the operation of the mobile robot1is similar to that of the person, so that the user may feel the familiarity. In more detail, the shaft support portion760may include an annular portion761, a plurality of protrusions762, and an upper bearing764. The annular portion761may be penetrated by the rotating shaft720. That is, the rotating shaft720may be connected to the shaft connecting body730by passing through the annular portion761. The upper bearing764may be mounted on the inner peripheral surface of the annular portion761. The upper bearing764may rotatably support the rotating shaft720while being in contact with the rotating shaft720in a state of being mounted to the annular portion761. The protrusion762may protrude from the annular portion761in the radially outward direction. A plurality of protrusions762may be formed to be spaced apart from each other by a predetermined distance in the peripheral direction of the annular portion761. In one example, the number of the protrusions762may be three. A limiter763may be formed on any one of the plurality of protrusions762. The limiter763may be formed to protrude upward from the protrusion762. A plurality of protrusions762may be respectively connected to a plurality of supporting shafts770. Thus, the shaft support portion760may be fixed. Since the shaft support portion760includes the annular portion761and the protrusion762, the size of the annular portion761may be made compact while the supporting shaft770is sufficiently spaced apart from the rotating shaft720. Therefore, even when the shaft support portion760is disposed inside the neck housing620(seeFIG.10), an electric wire or a harness may easily pass through the interior of the neck housing620. The supporting shaft770may fix the shaft support portion760. The supporting shaft770may be extending vertically. The supporting shaft770may extend into the body display unit500through the interior of the neck housing620(seeFIG.10) from the shaft support portion760, more specifically the protrusion762. The lower end of the supporting shaft770may be connected to the motor mounter750. A plurality of supporting shafts770may be provided to be connected to a plurality of protrusions762, respectively.
In this case, the number of supporting shafts770may be the same as the number of protrusions762of the shaft support portion760. Accordingly, the plurality of supporting shafts770may firmly fix the shaft support portion760. Hereinabove, although the present disclosure has been described with reference to exemplary embodiments and the accompanying drawings, the present disclosure is not limited thereto, but may be variously modified and altered by those skilled in the art to which the present disclosure pertains without departing from the spirit and scope of the present disclosure claimed in the following claims. Therefore, the exemplary embodiments of the present disclosure are provided to explain the spirit and scope of the present disclosure, but not to limit them, so that the spirit and scope of the present disclosure is not limited by the embodiments. The scope of the present disclosure should be construed on the basis of the accompanying claims, and all the technical ideas within the scope equivalent to the claims should be included in the scope of the present disclosure. | 50,100 |
11858130 | DETAILED DESCRIPTION A controller of a robot according to the embodiment will be described with reference toFIG.1toFIG.10. The controller of the robot of the present embodiment is formed so that an operator can manually drive the robot. Furthermore, the controller is formed so that an inching operation in which a position and an orientation of the robot are changed by a minute driving amount can be performed. Control for manually driving a robot can be used, for example, when a teaching point is set. FIG.1is a schematic view of a robot apparatus in the present embodiment. The robot apparatus9includes a hand2as a working tool and a robot1that moves the hand2. The robot1of the present embodiment is an articulated robot having a plurality of joints. Each constituent member of the robot1is formed so as to rotate around a drive axis of the robot1. The robot1of the present embodiment includes a base14and a rotation base13supported by the base14. The rotation base13rotates relative to the base14. The robot1includes an upper arm11and a lower arm12. The lower arm12is supported by a rotation base13. The upper arm11is supported by the lower arm12. The robot1includes a wrist15which is supported by the upper arm11. The hand2for gripping or releasing the workpiece is fixed to the flange16of the wrist15. The robot1of the present embodiment includes six driving axes, but the embodiment is not limited to this. A robot that changes the position and the orientation by any mechanism can be employed. Further, the working tool of the present embodiment is the hand that grips the workpiece, but is not limited to this embodiment. The operator can select a working tool according to the operation that is performed by the robot apparatus. For example, a working tool for removing burrs, a working tool for performing welding, or a working tool for applying an adhesive, or the like can be employed. A reference coordinate system71is set in the robot apparatus9of the present embodiment. In the example illustrated inFIG.1, an origin of the reference coordinate system71is arranged at the base14of the robot1. The reference coordinate system71is also referred to as a world coordinate system. The reference coordinate system71is a coordinate system in which the position of the origin is fixed, and further, the orientation of the coordinate axis is fixed. Even if the position and the orientation of the robot1change, the position and the orientation of the reference coordinate system71do not change. The reference coordinate system71has an X axis, a Y axis, and a Z axis which are orthogonal to each other as coordinate axes. Additionally, a W axis is set as the coordinate axis around the X axis. A P axis is set as the coordinate axis around the Y axis. An R axis is set as the coordinate axis around the Z axis. In the present embodiment, the tool coordinate system72that has an origin which is set at any position in the working tool is set. The origin of the tool coordinate system72of the present embodiment is set to a tool center point70. The tool coordinate system72has an X axis, a Y axis, and a Z axis which are orthogonal to each other as coordinate axes. In the example illustrated inFIG.1, the tool coordinate system72is set so that the extending direction of the Z axis is parallel to the direction in which the claw part of the hand2extends. Furthermore, the tool coordinate system72has a W axis around the X axis, a P axis around the Y axis, and an R axis around the Z axis. 
For example, the position of the robot1corresponds to the position of the tool center point70. Furthermore, the orientation of the robot1corresponds to the orientation of the tool coordinate system72with respect to the reference coordinate system71. FIG.2is a block diagram illustrating the robot apparatus of the present embodiment. Referring toFIG.1andFIG.2, the robot1includes a robot driving device that changes the position and the orientation of the robot1. The robot driving device includes a robot driving motor22that drives constituent members such as an arm and a wrist. In the present embodiment, a plurality of robot driving motors22are arranged corresponding to the respective drive axes. The robot apparatus9includes a hand driving device that drives the hand2. The hand driving device includes a hand driving motor21that drives the claw part of the hand2. The claw part of the hand2is opened or closed by driving the hand driving motor21. The robot apparatus9includes a controller4that controls the robot1and the hand2. The controller4includes a processing device40that performs the control, and a teach pendant37for the operator to operate the processing device40. The processing device40includes an arithmetic processing device (computer) including a Central Processing Unit (CPU) serving as a processor, and a Random Access Memory (RAM) and a Read Only Memory (ROM) or the like connected to the CPU via a bus. The robot1and the hand2are connected to the controller4via a communication device. The teach pendant37is connected to the processing device40via a communication device. The teach pendant37includes an input part38for inputting information regarding the robot1and the hand2. The input part38is constituted of a keyboard, a dial, and the like. The operator can input the set value of the variables, allowable ranges of the variables, and the like into the processing device40from the input part38. The teach pendant37includes display part39that displays information regarding the robot1and the hand2. The operator can set a teaching point of the robot1by manually driving the robot1. The controller4can generate an operation program for performing the operation of the robot1and the hand2based on the teaching point. Alternatively, the operation program may be input to the controller4. The processing device40includes a storage unit42that stores information regarding the control of the robot1and the hand2. The storage unit42can be configured by a storage medium capable of storing information, such as a volatile memory, a non-volatile memory, or a hard disk. The processor of the arithmetic processing device is formed to be able to read the information stored in the storage unit42. The operation program is stored in the storage unit42. The robot apparatus9of the present embodiment conveys the workpiece based on the operation program. The robot1can automatically convey the workpiece from an initial position to a target position. The processing device40includes an operation control unit43configured to control the operation of the robot1. The operation control unit43corresponds to a processor that is driven in accordance with the operation program. The processor reads the operation program and functions as the operation control unit43by performing the control that is defined in the operation program. The operation control unit43sends operation commands for driving the robot1based on the operation program to a robot driving unit45. 
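Since the position of the robot1is the position of the tool center point70and the orientation of the robot1is the orientation of the tool coordinate system72relative to the reference coordinate system71, a pose can be written as three translations plus three rotations about the X, Y, and Z axes (the W, P, and R axes). The sketch below is only a minimal illustration of that convention; the X-then-Y-then-Z rotation order and the function name are assumptions, not something stated in the patent.

import numpy as np

def tool_pose_matrix(x, y, z, w, p, r):
    """4x4 homogeneous transform of the tool frame expressed in the reference frame.
    (x, y, z) is the tool center point position; w, p, r are rotations (rad) about the
    X, Y and Z axes, composed here in X->Y->Z order (an assumed convention)."""
    cw, sw = np.cos(w), np.sin(w)
    cp, sp = np.cos(p), np.sin(p)
    cr, sr = np.cos(r), np.sin(r)
    rot_x = np.array([[1, 0, 0], [0, cw, -sw], [0, sw, cw]])
    rot_y = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rot_z = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    pose = np.eye(4)
    pose[:3, :3] = rot_z @ rot_y @ rot_x
    pose[:3, 3] = [x, y, z]
    return pose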
The robot driving unit45includes an electric circuit that drives the robot driving motor22. The robot driving unit45supplies electricity based on the operation commands to the robot driving motor22. Further, the operation control unit43controls the operation of the hand2. The operation control unit43sends operation commands for driving the hand2based on the operation program to a hand driving unit44. The hand driving unit44includes an electric circuit that drives the hand driving motor21. The hand driving unit44supplies electricity based on the operation commands to the hand driving motor21. The robot1includes a status detector for detecting the position and orientation of the robot1. The status detector of the present embodiment includes a position detector19which is attached to the robot driving motor22corresponding to the driving axis of an arm, and the like. The orientation of the constituent member along each driving axis can be acquired from an output from the position detector19. For example, the position detector19detects a rotational angle when the robot driving motor22is driven. In the present embodiment, the position and the orientation of the robot1are detected based on the output from a plurality of the position detector19. FIG.3is an enlarged perspective view of an operation device of the present embodiment. Referring toFIG.1toFIG.3, the controller4of the robot of the present embodiment includes an operation device5for an operator to perform an operation of manually changing the position and the orientation of the robot1. In the present embodiment, the operation device5is fixed to a stationary frame81. The operator can perform manual operation of the robot1by standing in the vicinity of the frame81and operating the operation device5. The operation device5of the present embodiment includes an arithmetic processing device including a CPU serving as a processor, RAM, and the like. The operation device5includes a wireless communication unit58including a communication member for performing communication. The teach pendant37also includes a wireless communication unit including a wireless dongle36as a communication member. The operation device5is formed so as to be able to communicate with each other wirelessly with the teach pendant37. Further, the operation device5is formed so as to be able to communicate with the processing device40via the teach pendant37. The processing device40can process a signal from the operation device5. The operation device5of the present embodiment is formed so as to be able to communicate wirelessly with the teach pendant37, but the embodiment is not limited to this. The operation device5may be connected to teach pendant37via a communication line. Alternatively, the operation device5may be formed so as to be able to communicate with the processing device40wirelessly or by the communication line. The operation device5of the present embodiment includes a main body51and a stick52serving as a movable part that is supported by the main body51. The movable part is constituted by a movable member that moves by being operated by an operator. The stick52is formed so as to move relative to the main body51. The main body51is fixed to the frame81. InFIG.3, an operation coordinate system73for illustrating the operation of the stick52is described. The origin of the operation coordinate system73can be arranged on the axis of the stick52when the hand is released from the stick52. Also, the orientation of the operation coordinate system73can be set to any orientation. 
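The position and the orientation of the robot1are detected by combining the rotational angles reported by the position detectors19on the drive axes, which is a forward-kinematics computation. The patent gives no link dimensions or kinematic parameters, so the following is only a toy planar example with assumed names and values, meant to show how joint angles could be accumulated into a tool position.

import numpy as np

def planar_forward_kinematics(joint_angles_rad, link_lengths_m):
    """Toy two-dimensional chain: accumulate joint angles and link offsets to get the tool pose."""
    x = y = heading = 0.0
    for theta, length in zip(joint_angles_rad, link_lengths_m):
        heading += theta
        x += length * np.cos(heading)
        y += length * np.sin(heading)
    return x, y, heading

# Example: two links of 0.4 m and 0.3 m with the joints at 30 and 45 degrees.
print(planar_forward_kinematics(np.radians([30.0, 45.0]), [0.4, 0.3]))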
In the example illustrated inFIG.3, the Z axis is arranged so as to overlap the axis of the stick52when the hand is released from the stick52. The stick52of the present embodiment is formed so as to be able to tilt in the X axis direction and the Y axis direction of the operation coordinate system73. That is, the stick52is formed so as to rotate about a predetermined rotation center. Additionally, the stick52is formed so as to be able to tilt even in an intermediate region sandwiched between the X axis and the Y axis. As described above, the stick52is formed so as to be able to tilt in any direction. Additionally, the stick52is formed so as to be able to perform a pulling operation or a pushing operation in the Z axis direction. A spring serving as an elastic member that biases the stick52back to the neutral state is arranged inside the main body51. The neutral state is, for example, a state in which the axis of the stick52extends in the vertical direction. In the Z axis direction, the center position of the operable range of the pushing operation and the pulling operation corresponds to the position of the neutral state. Furthermore, the operation device5of the present embodiment is formed so that the stick52can be rotated in the direction of the R axis around the Z axis of the operation coordinate system73. In other words, it is formed so that the stick52can be twisted by the operator. The position of the neutral state at this time corresponds to a central position in the range in which the twisting operation along the R axis is possible. The arithmetic processing device of the operation device5includes an operation detection unit57for detecting an operation of the operator. The operation detection unit57corresponds to a processor that is driven in accordance with the operation program. The processor reads the operation program and functions as the operation detection unit57by performing the control defined in the operation program. When the operator operates the stick52, the operation detection unit57detects an operation direction and an operation amount of the stick52. As the operation amount, the movement amount of the stick52at a predetermined position or an angle of rotation around the center of rotation when the stick52rotates can be detected. The operation detection unit57can detect the operation direction and the operation amount by using each the coordinate value of the operation coordinate systems73. In particular, the operation detection unit57is formed so that the operation amount from the neutral state of the stick52can be detected. In this way, the operation detection unit57can detect the operation that is performed by the operator. Note that the operation detection unit may not include a CPU as a processor. The operation detection unit may be formed so that the operation of the stick by the operator can be detected by a mechanical mechanism. The operation device5of the present embodiment is fixed to the frame81, but the embodiment is not limited to this. The operation device5can be arranged in various positions. For example, the operation device5can be fixed to a support member by arranging the support member on the wrist15or the hand2of the robot1. That is, the operation device5can be arranged so as to move with the wrist15or the hand2of the robot1. In this case, the operator can manually change the position and the orientation of the robot1by standing in the vicinity of the robot1and operating the operation device5. 
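The operation detection unit57is described as detecting the operation direction and the operation amount of the stick52measured from the neutral state, over the X/Y tilt, the Z push-pull, and the R twist. As a hedged illustration of that idea only, the sketch below assumes the raw readings arrive as normalized deflections and uses a hypothetical dead zone to represent the spring-returned neutral state.

def read_stick(raw_deflection, dead_zone=0.05):
    """Return the operation amount per axis measured from the neutral state.
    raw_deflection holds normalized values in [-1, 1] for the X/Y tilt, the Z push-pull
    and the R twist; readings inside the dead zone are treated as neutral."""
    operation = {}
    for axis in ("x", "y", "z", "r"):
        value = raw_deflection.get(axis, 0.0)
        operation[axis] = 0.0 if abs(value) < dead_zone else value
    return operation

# Example: a slight unintended tilt in Y is ignored, the deliberate X tilt is kept.
print(read_stick({"x": 0.6, "y": 0.02}))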
Alternatively, the operator may hold the operation device by a hand so as to operate the operation device. FIG.4is a schematic view of an operation device and a robot for illustrating the correspondence between the operation direction of the stick of the operation device and the direction in which the robot is driven. Referring toFIG.2andFIG.4, the processing device40of the present embodiment is formed to be able to perform a jog operation in which the position and the orientation of the robot1are continuously changed, during the period in which the operation of the stick52is performed. Furthermore, the processing device40is formed to be able to perform the inching operation that changes the position and the orientation of the robot1by the predetermined minute amount in response to the operation of the stick52. The operation device5of the present embodiment includes a button53that switches between a jog mode for performing a jog operation and an inching mode for performing an inching operation. The operator can switch between the jog mode and the inching mode by pressing the button53. (Control of Jog Operation) The jog operation for continuously changing the position and the orientation of the robot1will be described. When performing the jog operation, the operator sets the control mode to the jog mode. The processing device40changes the position and the orientation of the robot1during the period in which the stick52is being operated. In the jog operation, the position and the orientation of the robot1are changed in response to the operation direction and the operation amount of the stick52of the operation device5. The processing device40of the present embodiment includes a manual control unit61that changes the position and the orientation of the robot1in response to the operation of the operation device5. The manual control unit61sends an operation command to the operation control unit43in response to the operation of the operation device5. Then, the operation control unit43changes the position and the orientation of the robot1based on the operation command. As described below, the manual control unit61includes a time measuring unit62and a force detection unit63. Each unit of the manual control unit61, the time measuring unit62, and the force detection unit63corresponds to a processor that is driven in accordance with the operation program. The processor functions as each unit by reading the operation program and performing control defined in the operation program. In the jog operation, the manual control unit61can drive the robot1so that the direction of the coordinate axis in the operation coordinate system73corresponds to the direction of the coordinate axis of the tool coordinate system72. For example, when the operator tilts the stick52in the X axis direction of the operation coordinate system73, the manual control unit61controls the robot1so that the tool center point70moves in the direction of the X axis of the tool coordinate system72. In the present embodiment, the operation device5is fixed to the frame. The position and the orientation of the operation device5is immobile, thus, the position and the orientation of the operation coordinate system73is fixed. The operator can input the position and the orientation of the operation coordinate system73as a set value48. For example, the operator can input the position and the orientation of the operation coordinate system73with respect to the reference coordinate system71in advance. 
Further, the manual control unit61can calculate the position and the orientation of the tool coordinate system72with respect to the reference coordinate system71based on the output of the position detector19. For this reason, the manual control unit61can calculate the relative position and the relative orientation of the tool coordinate system72with respect to the operation coordinate system73. Similarly, when the operation device5is fixed to the hand or the wrist of the robot, the position and the orientation of the operation coordinate system73with respect to the reference coordinate system71is calculated based on the output of the position detector19. For this reason, the manual control unit61can calculate the relative position and the relative orientation of the tool coordinate system72with respect to the operation coordinate system73. When the operator operates the stick52in one direction of the operation coordinate system73, the manual control unit61changes the position and the orientation of the robot1such that the tool coordinate system72(hand2) moves in the direction in which the stick52is operated. Here, the controller4is formed so as to be able to switch between a translation operation that moves the position of the tool center point70while maintaining the orientation of the hand2, and a rotation operation that changes the orientation of the hand2while maintaining the position of the predetermined point. In the rotation operation of the present embodiment, the hand2rotates about the tool center point70as the center of rotation. The operator may switch between the translation operation and the rotation operation by pressing a button55arranged on the main body51. For example, referencingFIG.3, in the translation operation, by tilting the stick52in the direction of the X axis of the operation coordinate system73, the hand2is controlled so as to move in the direction of the X axis of the tool coordinate system72. In the rotation operation, by tilting the stick52in the direction of the X axis of the operation coordinate system73, the hand2is controlled so as to rotate in the direction of the W axis of the tool coordinate system72. A movement speed and a rotation speed of the hand2in the jog operation (the movement speed and the rotational speed of the tool coordinate system72) can be predetermined by the operator and stored in the storage unit42as set values48. Furthermore, regarding the movement amount of the jog operation, the robot1continues to be driven during the period when the operator is operating the stick52. In the above embodiment of the jog operation, the position and the orientation of the robot1are changed so that the coordinate axes of the operation coordinate system73correspond to the coordinate axes of the tool coordinate system72, but the embodiment is not limited to this. The robot1can be controlled so that the working tool moves or rotates in any direction according to the operation of the stick52of the operation device5. For example, the manual control unit61may change the position and the orientation of the robot1so that the coordinate axes of the operation coordinate system73correspond to the coordinate axes of the reference coordinate system71. For example, when the stick52tilts in the direction of the X axis of the operation coordinate system73, the manual control unit61may control the position of the tool center point70so as to move in the direction of the X axis of the reference coordinate system71. 
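Whichever coordinate system the stick operation is mapped to, one control cycle of the jog operation can be pictured with the short sketch below. It is an illustration only, not the processing of the manual control unit 61: numpy, the frame_axes argument, the speed value, and the control period are assumptions made for the example, and the actual motion is of course generated through the robot kinematics rather than a single matrix product.

```python
import numpy as np

def jog_step(stick_xyz, frame_axes, speed_mm_s, dt_s):
    """One control cycle of a jog operation (illustrative sketch only).

    stick_xyz  : stick deflection read in the operation coordinate system 73;
                 its components are taken as components along the axes of the
                 selected frame (e.g. the tool coordinate system 72).
    frame_axes : 3x3 matrix whose columns are the axes of the selected frame,
                 expressed in the frame in which the increment is wanted
                 (identity when the two frames are aligned).
    speed_mm_s : preset jog speed stored as a set value 48.
    dt_s       : control period.
    Returns the translation increment of the tool center point 70 for this cycle.
    """
    v = np.asarray(stick_xyz, dtype=float)
    n = np.linalg.norm(v)
    if n < 1e-6:
        return np.zeros(3)              # stick at neutral: the robot is not driven
    direction = frame_axes @ (v / n)    # map the operated direction into the frame
    return direction * speed_mm_s * dt_s

# While the operator holds the stick tilted toward +X, the tool center point 70
# keeps stepping along the +X axis of the selected frame every control cycle.
print(jog_step([1.0, 0.0, 0.0], np.eye(3), speed_mm_s=20.0, dt_s=0.01))
```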
Alternatively, the manual control unit can control the wrist so as to move in a direction in which the stick is operated, when the operation device is arranged on the wrist of the robot. In this case, the coordinate axis of the operation coordinate system and the coordinate axis of the tool coordinate system may be set so that the direction in which the stick is operated matches the direction in which the wrist is moved. (Control of Inching Operation) Next, an inching operation in which the position and the orientation of the robot1are changed by a predetermined minute amount will be described. When performing the inching operation, the operator switches the control mode to the inching mode by pressing a button53. The drive amount of the robot in the inching mode is minute. For example, in the translation operation, the movement distance of the hand2(the movement distance of the tool coordinate system72) is set to be 10 mm or less. More preferably, the movement distance of the hand2is set to be 5 mm or less. In the rotation operation, for example, the rotation angle of the hand2(the rotation angle of the tool coordinate system72) is set to be 5° or less. More preferably, the rotation angle of the working tool is set to be 1° or less. (Determination Control for Determining Whether or not Inching Operation is Performed) First, determination control for determining whether or not an inching operation is performed will be described. A first control of the determination control is similar to the control of the jog operation. When the operator moves the stick52, the manual control unit61performs a control for performing the inching operation. That is, regardless of the time length in which the stick52is operated or the force applied to the stick52, when the stick52is operated, the manual control unit61performs the inching operation by a predetermined movement amount. In a second control and a third control of the determination control, the manual control unit61determines whether or not the inching operation is performed based on the elapsed time from the time when the stick52is operated. Referring toFIG.2, the manual control unit61of the present embodiment includes a time measuring unit62that measures the time associated with the operation of the stick52. The determination value of the elapsed time of the operation can be predetermined by the operator. For example, a time of 1 second or less can be set as the determination value of the elapsed time. The operator can store the determination value in the storage unit42as the set value48. In the second control of the determination control, the manual control unit61does not perform the inching operation when the elapsed time from the time when the stick52is operated until the hand is released is longer than the predetermined determination value. On the other hand, the manual control unit61performs the inching operation when the elapsed time from the time when the stick52is operated until the hand is released is equal to or less than the predetermined determination value. The manual control unit61determines that the operator has released a hand from the stick52when the stick52returns to the neutral state. In the second control, when the operator continues to operate the stick52for a longer period of time than the predetermined determination value, the manual control unit61does not perform the inching operation. 
On the other hand, when the operator releases the hand in a period of time equal to or less than the predetermined determination value, the manual control unit61performs the inching operation one time. In the inching operation, the position and the orientation of the robot1slightly change. In the second control, the inching operation is performed when the operator operates the stick52in a short period of time. In the second control, when performing the inching operation of a slight movement amount, it is sufficient to perform an operation for a short time, and the operator can perform the inching operation without discomfort. Alternatively, the operator may sometimes misunderstand that the current control mode is the jog mode even though the current control mode is the inching mode. In such a case, the robot1can be prevented from being driven when the stick52is operated for a long time. In the third control of the determination control, the manual control unit61does not perform the inching operation when the elapsed time from the time when the stick52is operated until the hand is released is shorter than the predetermined determination value. On the other hand, when the elapsed time from the time when the stick52is operated is equal to or greater than the predetermined determination value, the manual control unit61performs the inching operation. In the third control, the manual control unit61does not perform the inching operation when the operator releases the hand from the stick52within a shorter time than the predetermined determination value. On the other hand, when the stick52continues to be operated in a period of time equal to or greater than the predetermined determination value, the manual control unit61performs the inching operation one time. By performing the third control, it is possible to avoid performing the inching operation when the operator mistakenly operates the stick or accidentally bumps against the stick. Next, in a fourth control and a fifth control of the determination control, the manual control unit61determines whether or not the inching operation is performed based on the magnitude of the force with which the stick52is operated. Such a determination value of the force can be set in advance by the operator. The determination value can be stored in the storage unit42as the set value48. The determination value of the magnitude of the force can, for example, be set to a value of 5N or less. More preferably, the determination value of the magnitude of the force can be set to a value of equal to or less than 2N. Alternatively, the determination value of the force may be set based on the movement amount of the stick52. For example, the determination value of the force may be set to a magnitude at the time when the stick52is operated to a position that is ½ of the maximum movement amount. The manual control unit61includes a force detection unit63that detects the magnitude of the force applied to the stick52when the stick52is operated. The force detection unit63detects the magnitude of the force with which the stick52is operated based on the output of the operation detection unit57. The force detection unit63of the present embodiment acquires the operation amount from the neutral state of the stick52from the operation detection unit57. The force detection unit63can calculate the magnitude of the force applied by the operator based on the operation amount of the stick52.
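A compact sketch may help to fix ideas about the determination control. The fragment below is illustrative only and is not the processing of the manual control unit 61: the function names, the linear spring model used to estimate the force from the operation amount, and the default determination values (1 second, 2 N) are assumptions introduced for the example. It covers the time-based second and third controls described above and the force-based fourth and fifth controls that are detailed below.

```python
def estimate_force(operation_amount, max_amount=1.0, max_force_n=10.0):
    """Estimate the force applied to the spring-returned stick 52 from its
    deflection, assuming (for this sketch only) a linear spring characteristic."""
    return (operation_amount / max_amount) * max_force_n

def inch_by_time(elapsed_s, determination_s=1.0, mode="second"):
    """Second control: inch only for short operations (<= determination value).
    Third control : inch only for long operations  (>= determination value)."""
    return elapsed_s <= determination_s if mode == "second" else elapsed_s >= determination_s

def inch_by_force(force_n, determination_n=2.0, mode="fourth"):
    """Fourth control: inch only for light operations (< determination value).
    Fifth control : inch only for firm operations  (> determination value)."""
    return force_n < determination_n if mode == "fourth" else force_n > determination_n

print(inch_by_time(0.4))                    # short tap -> one inching operation
print(inch_by_time(2.0))                    # long hold -> no inching (second control)
print(inch_by_time(0.2, mode="third"))      # accidental touch -> no inching (third control)
print(inch_by_force(estimate_force(0.15)))  # light push -> inching (fourth control)
print(inch_by_force(6.0, mode="fifth"))     # firm push -> inching (fifth control)
```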
In the present embodiment, as described above, the spring is arranged inside the main body51such that the stick52returns to the neutral state. The greater the amount of operation detected by the operation detection unit57, the greater the force applied by the operator. The force detection unit63can calculate the magnitude of the force applied to the stick52based on the operation amount of the stick52. In the fourth control of the determination control, the manual control unit61performs a control that does not perform the inching operation when the magnitude of the force with which the stick52is operated is equal to or greater than the predetermined determination value. On the other hand, the manual control unit61performs the inching operation when the magnitude of the force with which the stick52is operated is less than a predetermined determination value. The manual control unit61performs the inching operation one time for one operation. In a fourth control, an inching operation can be performed with a small force. The inching operation can be performed with a slight movement amount of the stick52. For this reason, the inching operation can be performed in a short period of time when performing the inching operation a plurality of times. For this reason, the robot1can be driven in a short time to the target position and orientation. In the fifth control of the determination control, the manual control unit61does not perform the inching operation when the magnitude of the force with which the stick52is operated is equal to or less than the predetermined determination value. On the other hand, when the magnitude of the force at the time when the stick52is operated is greater than the predetermined determination value, the manual control unit61performs the inching operation. The manual control unit61performs the inching operation one time for one operation. In the fifth control, the inching operation is performed when the force applied to the stick52is great. For this reason, the inching operation can be avoided when the operator mistakenly touches the stick52. Note that, in the fourth control and the fifth control, the time length of operating the stick52may be measured by the time measuring unit62. The range of time length of operating the stick52can be determined. The manual control unit61may determine the force applied to the stick within this range of time length. Furthermore, the processing device40may be formed so that the plurality of the controls described above can be performed with respect to a determination control that determines whether or not the inching operation of the robot1is performed. In this case, the processing device40can be formed so as to switch between the plurality of the controls. In the determination control of the present embodiment, it is possible to determine whether or not the inching operation is performed by the operation of the operation device, thus, the operability of the manual operation is improved. (Movement Direction Control Regarding Movement Direction of Working Tool) Next, control regarding the movement direction of the working tool will be described. In a first control regarding the movement direction of the working tool, control which is similar to the aforementioned control of the jog operation is performed even in the inching operation. 
The manual control unit61can change the position and the orientation of the robot1so that the hand2moves toward the direction of the tool coordinate system72corresponding to the direction of operation of the stick52in the operation coordinate system73. For example, the operator selects the translation operation or the rotation operation with the button55. The manual control unit61acquires the operation of the stick52based on the operation coordinate system73. The manual control unit61changes the position and the orientation of the robot1so that the hand2moves or rotates in the direction of the tool coordinate system72corresponding to the direction of operation of the stick52. For example, when the operator tilts the stick52in the direction of the X axis of the operation coordinate system73in the control mode in which the translation operation is performed, the manual control unit61changes the position and the orientation of the robot1such that the tool center point70moves in the direction of the X axis of the tool coordinate system72. In the first control, since the direction in which the stick52is operated corresponds to the direction in which the hand2moves, the operator can operate the robot1even without looking at the teach pendant37. For example, the robot1can be operated while viewing the robot1without confirming the position of the button that is corresponded to the direction of operation and that is arranged on the teach pendant37. Next, in the second control regarding the movement direction of the working tool, the manual control unit61performs the inching operation such that the hand2moves in the predetermined direction regardless of the type of operation of the stick52. The operator can set the setting direction in which the hand2moves and store the setting direction as the set value48in the storage unit42. For example, the direction on the positive side of the Y axis of the reference coordinate system71can be set as the setting direction in which the hand2moves. Even if the operator tilts the stick52in the X axis direction of the operation coordinate system73, tilts it in the Y axis direction, pushes it in the Z axis direction, and pulls it in the Z axis direction, the manual control unit61controls the robot1so that the tool center point70moves in the positive direction of the Y axis. Alternatively, the operator can set a curved movement path along which the hand2moves in advance. The operator can store the curved movement path as a set value48in the storage unit42. The movement path can be set in any coordinate system. For example, the path represented by the reference coordinate system71can be set as the movement path. The movement path includes the orientation of the hand2in addition to the direction in which the hand2moves. FIG.5illustrates a schematic diagram illustrating a curved movement path. The coordinate axes of the reference coordinate system71are illustrated inFIG.5. The operator can set the movement path85in which the tool center point70moves in the space around the robot1. Furthermore, the orientation of the hand2can be set in the movement path85. For example, a tool center point70is arranged at a position P0. By operating the stick52in any direction, the operator changes the position and the orientation of the robot1so that the tool center point70moves from the position P0to the position P1. The tool center point70moves along the movement path85. 
By performing the second control, the inching operation can be performed such that the hand2moves along the predetermined movement path85. For example, when the removal of burrs formed in the corner of the workpiece is performed, it may be confirmed whether or not the tool attached to the working tool moves along the shape of the corner. The operator may set a curved movement path along the shape of the corner. Then, the operator changes the position and the orientation of the working tool by a minute amount along the movement path by performing the inching operation. The operator can confirm whether or not the position and orientation of the working tool is appropriate for each minute section. Note that in the example described above, the movement path is curved, but the embodiment is not limited to this. The movement path may be linear. Next, in the third control regarding the movement direction of the working tool, the robot1can be controlled such that the hand2moves in a direction which is shifted by a predetermined angle with respect to the direction in which the stick52is operated by the operator. This predetermined angle can be set in advance by the operator and stored in the storage unit42as a preset set value48by the operator. For example, the operator can set the 30° on the positive side of the R axis as the angle that shifts the direction of operation. In the translation operation, the operator performs an operation of tilting the stick52to the positive side of the X axis of the operation coordinate system73. The manual control unit61can control the position and the orientation of the robot1so that the hand2moves in a direction which is shifted 30° on the positive side of the R axis from a direction on the positive side of the X axis of the tool coordinate system72. The processing device40may be formed so as to be able to perform the plurality of the controls regarding the movement direction of the hand described above. In this case, the processing device40can be formed so that the plurality of the controls can be switched. In the movement direction control of the present embodiment, the hand can be moved in a desired direction by operation of the operation device corresponding to the direction in which the hand moves. For this reason, the operability of manual operation is improved. (Movement Amount Control Regarding Movement Amount of Hand) Next, control regarding the movement amount by which the hand2changes in position and orientation in one inching operation will be described. In the translation operation, for example, the length at the time when the hand2(or the tool center point70) moves corresponds to the movement amount. In addition, in the rotation operation, for example, the rotation angle at which the hand2rotates corresponds to the movement amount. The movement amount can be set in various forms. The operator can set the movement amount in advance and store the movement amount as a set value48in the storage unit42. The manual control unit61changes the position and the orientation of the robot1so that the hand2moves by a predetermined movement amount when the movement direction of the hand2is determined. In a first control regarding the amount of hand movement, when performing the translation operation, the operator sets the straight-line distance from the current position P0to the moved position P1as the movement amount. That is, the straight-line distance that the tool center point70moves is set as the movement amount. 
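Before continuing with the movement amount, note that the third control regarding the movement direction described above amounts to rotating the commanded direction in the XY plane of the tool coordinate system 72 by the preset angle about the R axis. A minimal sketch, with the 30 degree value taken from the example above (the function name is an assumption for the illustration):

```python
import math

def shift_direction(direction_xy, shift_deg=30.0):
    """Rotate a commanded direction in the XY plane of the tool coordinate
    system 72 by a preset angle about the R axis (the Z axis)."""
    a = math.radians(shift_deg)
    x, y = direction_xy
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

# Operating the stick toward +X with a 30 degree shift set: the hand is
# commanded toward a direction rotated 30 degrees toward +Y.
print(shift_direction((1.0, 0.0)))
```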
In addition, when performing the rotation operation, the operator can set the rotation angle of the hand2as the movement amount. FIG.6shows a first graph that describes the movement amount in the first control. InFIG.6, the movement amount of the working tool in a translation operation is illustrated in the tool coordinate system72. That is, the distance at which the tool center point70moves from the start point position P0of the inching operation to the position P1at the end of the inching operation is illustrated. The movement amount in all directions is indicated by a sphere87. InFIG.6, the movement amount in the inching operation is set to a distance that is equal for all directions. For example, the movement amount in the movement direction indicated by the arrow94and the movement amount in the movement direction indicated by the arrow95are set to be equal. FIG.7shows a second graph that describes the movement amount in the first control. InFIG.7, the movement amount of the working tool in the translation operation is illustrated in the tool coordinate system72. In each movement direction, the movement amount can be set to be different. In the second graph, the movement amount is indicated by a quadrangular prism88. In the example illustrated inFIG.7, the movement amount is set to be large in the direction of the X axis, and the movement amount is set to be small in the Y axis direction and the Z axis direction. As a result, the movement amount in the movement direction indicated by the arrow96is smaller than the movement amount in the movement direction indicated by the arrow97. In this way, the movement amount can be set to any distance in each direction. In the examples illustrated inFIG.6andFIG.7that are described above, the translation operation is described, but a similar control for rotation operation can be performed. For example, the same movement amount (rotation angle) can be set for the W axis, the P axis, and the R axis of the tool coordinate system72. Alternatively, different amounts of movement (rotation angle) may be set for each of the W axis, the P axis, and the R axis of the tool coordinate system72. Next, in the second control regarding the movement amount of the hand, in the translation operation, the length projected on the predetermined plane is set to be the movement amount of the straight-line movement from the current position P0to the position P1where the inching operation ends. FIG.8is a schematic diagram illustrating the length when a straight-line movement path is projected on the XY plane. The movement path ofFIG.8is illustrated in a tool coordinate system72. In the example here, the XY plane is set as the predetermined plane. By performing the inching operation, the tool center point70moves from a position P0to a position P1. At this time, the operator can set a distance dxy obtained by projecting the movement path onto, for example, the XY plane including the X axis and the Y axis. The manual control unit61calculates the target position P1in which a distance is dxy in the XY plane based on the movement direction. The manual control unit61changes the position and the orientation of the robot1such that the tool center point70moves from the position P0to the position P1. Next, as illustrated inFIG.5, the third control regarding the movement amount is the control when the curved movement path85among which the hand2moves is predetermined. 
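Before describing the path-based controls in detail, the first and second controls of the movement amount described above can be sketched as follows. This is an illustration only: interpreting the quadrangular prism of FIG. 7 as a box whose surface limits the movement amount in each direction, as well as the per-axis values in millimeters, are assumptions made for the example.

```python
import numpy as np

def amount_for_direction(direction, per_axis=(10.0, 5.0, 5.0)):
    """First control sketch: per-axis movement amounts define a box, and the
    amount used for a given direction is the distance to the box surface along
    that direction. With equal values this reduces to the sphere of FIG. 6."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    limits = np.asarray(per_axis, dtype=float)
    scales = [limits[i] / abs(d[i]) for i in range(3) if abs(d[i]) > 1e-9]
    return min(scales)

def target_from_projected_length(p0, direction, dxy):
    """Second control sketch: choose the target position P1 so that the XY
    projection of the straight-line movement from P0 has length dxy."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    proj = np.hypot(d[0], d[1])
    if proj < 1e-9:
        raise ValueError("direction is parallel to the Z axis; XY projection is zero")
    return np.asarray(p0, dtype=float) + d * (dxy / proj)

print(amount_for_direction((1.0, 0.0, 0.0)))                       # full X amount
print(target_from_projected_length((0, 0, 0), (1.0, 0.0, 1.0), 5.0))
```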
Regardless of the type of operation of the stick52by the operator, the tool center point70moves along the movement path85. FIG.9illustrates a diagram that describes the movement amount of the third control. In the third control, the movement is performed from the current position P0to the position P1by the inching operation. The movement amount in this case is set at a length projected on a predetermined plane. In the example here, the XY plane is set as the predetermined plane. The operator can set the length dxy projected on the XY plane as the movement amount. The manual control unit61can calculate the position P1where the projected length becomes the length dxy based on the position P0and the movement path85. The manual control unit61changes the position and the orientation of the robot1such that the tool center point70moves along the movement path85from the position P0toward the position P1. Next, as illustrated inFIG.5, the fourth control regarding the movement amount is the control when the curved movement path85among which the hand2moves is predetermined. In the fourth control, the movement length along the movement path85is set as the movement amount. FIG.10illustrates a diagram that describes the movement amount of the fourth control. The movement path85is set to be curved. The tool center point70of the robot1moves along the movement path85from the position P0to the position P1. In the fourth control, the length dxy which is the movement length along the movement path85from the position P0to the position P1, can be set as the movement amount. The manual control unit61calculates the position P1where the movement length moved along the movement path85is dxy, based on the position P0and the movement path85. When operating the operation device5, the manual control unit61changes the position and the orientation of the robot1such that the tool center point70moves along the movement path85from the position P0toward the position P1. In the movement amount control of the present embodiment, the movement amount can be set by an appropriate method corresponding to the form of moving the respective hands. For this reason, the operability of manual operation is improved. (Setting Control for Setting Movement Amount and Movement Direction) Next, a setting control for setting the movement amount and the movement direction in the inching operation will be described. In the first control of the setting control, the movement amount and movement direction of the hand2are set by operating the teach pendant37. The operator sets the movement amount by operating the input part38while viewing the display part39. Alternatively, an input device that is different from the teach pendant37can be arranged in order to set the movement amount and the movement direction. For example, the input device including a slider type handle or the input device including a rotatable knob can be connected to the teach pendant37. The operator may set the movement amount and the movement direction by tilting the slider type handle while viewing the display part39of the teach pendant37. Alternatively, the operator can set the movement amount and the movement direction by rotating the knob while viewing the display part39of the teach pendant37. In the second control to the sixth control of the setting control, the movement amount and the movement direction are set by operating the operation device5of the present embodiment. 
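As a sketch of the path-based third and fourth controls of the movement amount described above, the target position can be found by walking along the stored movement path. The fragment below is illustrative only; approximating the movement path 85 as a polyline of points and the helper name are assumptions for the example.

```python
import math

def point_along_path(path, length):
    """Fourth control sketch: return the point reached by moving the given
    length along a movement path approximated as a polyline of 3D points."""
    remaining = length
    for (x0, y0, z0), (x1, y1, z1) in zip(path, path[1:]):
        seg = math.dist((x0, y0, z0), (x1, y1, z1))
        if remaining <= seg:
            t = remaining / seg
            return (x0 + t * (x1 - x0), y0 + t * (y1 - y0), z0 + t * (z1 - z0))
        remaining -= seg
    return path[-1]                    # movement amount exceeds the path length

# A curved path approximated by three points; one inching step of 5 mm.
path85 = [(0.0, 0.0, 0.0), (3.0, 4.0, 0.0), (6.0, 8.0, 2.0)]
print(point_along_path(path85, 5.0))
```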
As illustrated inFIG.7, when the movement amount is changed for each of the X axis, Y axis, and Z axis of the operation coordinate system73, the movement amount can be set for each coordinate axis. The controller4is formed so as to be able to switch between the operation mode for operating the robot1and the setting mode for setting the set value. The setting mode includes a mode for setting the movement amount and a mode for setting the movement direction. When setting the movement amount, the operator switches to a mode for setting the movement amount in the setting mode. When setting the movement direction, the operator switches to a mode for setting the movement direction in the setting mode. In the second control and the third control of the setting control, the manual control unit61sets the movement amount based on the time length of operating the stick52of the operation device5. The time measuring unit62measures the time length after operating the stick52. In the second control of the setting control, the manual control unit61sets the movement amount based on the number of times that the stick52is operated within a predetermined determination value of time length. For example, when the stick52is pressed twice in the Z axis direction of the operation coordinate system73within 5 seconds, the movement amount can be set to 5 mm. In addition, when the stick52is pressed three times in the Z axis direction of the operation coordinate system73within 5 seconds, the movement amount can be set to 10 mm. In the third control of the setting control, the manual control unit61performs a control for setting the movement amount based on the time length for continuing the operation of the stick52. The determination value of the time length for continuing the operation can be predetermined. For example, when the stick52tilts on the positive side of the X axis, the manual control unit61can set the movement amount to 5 mm when the hand is released and the stick52returns to the neutral state within 1 second. In addition, when the stick52tilts on the positive side of the X axis, the movement amount can be set to 10 mm when the elapsed time until the hand is released is longer than 1 second. In the fourth control of the setting control, the manual control unit61sets the movement amount based on the magnitude of the force applied to the stick52. The force detection unit63detects a force applied to the stick52. The manual control unit61sets the movement amount in response to the magnitude of the force acquired by the force detection unit63. The determination value of the magnitude of the force can be predetermined. The determination value of the magnitude of the force can be set, for example, to 5N. When the operator operates the stick52with a force less than the determination value on the positive side in the X axis direction, the movement amount can be set to 5 mm. Furthermore, when the operator operates the stick52with a force equal to or greater than the determination value on the positive side in the X axis direction, the movement amount can be set to 10 mm. For setting of the movement direction, the following fifth control and sixth control can be performed. The fifth control of the setting control can be performed when the hand2is moved in a predetermined movement direction, as in the second control of the control regarding the movement direction of the working tool.
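Before turning to the controls for setting the movement direction, the second and third controls of the setting control described above can be sketched as follows. The function names and the lookup table are assumptions for the example, while the numeric values (5 mm, 10 mm, the 5 second window and the 1 second determination value) are taken from the examples above; the force-based fourth control can be written analogously.

```python
def movement_amount_from_presses(press_count, within_window=True):
    """Second control of the setting control (sketch): the movement amount is
    chosen from the number of times the stick 52 is pressed in the Z axis
    direction within the predetermined time window (e.g. 5 seconds)."""
    if not within_window:
        return None                      # presses outside the window: nothing is set
    table = {2: 5.0, 3: 10.0}            # press count -> movement amount [mm]
    return table.get(press_count)

def movement_amount_from_hold(hold_s, determination_s=1.0):
    """Third control of the setting control (sketch): 5 mm for a short
    operation, 10 mm when the stick is held longer than the determination value."""
    return 5.0 if hold_s <= determination_s else 10.0

print(movement_amount_from_presses(2))   # -> 5.0 mm
print(movement_amount_from_hold(1.5))    # -> 10.0 mm
```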
The operator can set the movement direction by operating the stick52in a direction corresponding to the movement direction in the operation coordinate system73. In the sixth control of the setting control, similar to the setting of the movement amount, the manual control unit61can set the movement direction based on the number of times that the stick52is operated within a predetermined determination value of time length. For example, when the stick52is pressed twice in the Z axis direction of the operation coordinate system73within 5 seconds, the movement direction can be set in the positive direction of the X axis. In addition, when the stick52is pressed three times in the Z axis direction of the operation coordinate system73within 5 seconds, the movement direction can be set in the positive direction of the X axis. In addition, in the third control of the control regarding the movement direction of the working tool, the shifted angle can be set based on the number of times the stick52is operated, even when the angle shifted with respect to the direction in which the stick52is operated is set. The control for setting the movement amount and the movement direction can be combined with the control for the setting described above. The operator switches the controller4to the setting mode for setting the set value. The movement direction can be set by a first time operation to the stick52. The movement amount can be set by a second time operation to the stick52. For example, the direction in which the stick52is operated on the first time can be set to the movement direction in which the tool center point70moves. In the operation for the second time, the movement amount can be set based on the time length that the stick52is pushed in the Z axis direction. For example, when the time length of pushing the stick52is within 1 second, the movement amount can be set to 5 mm. Alternatively, when the time length of pushing the stick52is longer than 1 second, the movement amount can be set to 10 mm. In this way, the movement amount and the movement direction can be set continuously. Note that the setting of the movement amount and the movement direction is not limited to the above-described embodiment. For example, the movement amount and the movement direction set by a device different from the teach pendant may be input to the processing device via a communication device. In the second control to the sixth control of the setting control of the present embodiment, the movement amount and the movement direction can be set by the operation of the operation device. For this reason, the operability of manual operation is improved. (Permission Control for Permitting Operation of Robot) The operation device5of the present embodiment can be provided with the function of an enable switch. The enable switch is a switch that permits the robot to operate by pressing. For example, it can be controlled so that the operation of the robot is performed during the term in which the enable switch is pressed. Alternatively, the control can be performed that, when the enable switch is pushed once, the operation of the robot is permitted, and when the switch is pushed once again, the operation of the robot is prohibited. In a first control that permits operation of the robot1of the present embodiment, the manual control unit61determines whether the robot1is permitted or prohibited based on the elapsed time from the time when the stick52is operated. 
Immediately after the operator turns on the power of the controller4, the manual control unit61prohibits the operation of the robot1. In this state, the operator operates the stick52. The time measuring unit62measures the elapsed time from the time when the stick52is operated. When the elapsed time from the time when the operator operates the stick52to the time when the hand is released (until the stick52returns to the neutral state) is less than the predetermined determination value of the time length, the manual control unit61performs a control that permits the operation of the robot1. On the other hand, when the time elapsed from when the operator operates the stick52until the hand is released is equal to or greater than the predetermined determination value of the time length, the manual control unit61performs a control that prohibits the operation of the robot1. The determination value of the time length may be set to a time of, for example, 1 second or less. That is, when the operator operates the stick52in a short period of time, the manual control unit61permits the operation of the robot1. For example, before the operation for operating the robot1is performed, by pressing the stick52in the Z axis direction of the operation coordinate system73for a short period of time, the manual control unit61performs a control that permits the operation of the robot1. Alternatively, when the time elapsed from when the operator operates the stick52until the hand is released is greater than the predetermined determination value of the time length, the manual control unit61may perform control that permits operation of the robot1. That is, when the operator operates the stick52for a long period of time, the manual control unit61can permit the operation of the robot1. As the determination value of the time length at this time, for example, a time length of 1 second or less can be set. In the second control that permits the robot1to operate, the manual control unit61determines whether the robot1is permitted or prohibited based on the magnitude of the force with which the stick52is operated. In a state in which the manual control unit61prohibits operation of the robot1, the operator operates the stick52. The force detection unit63detects the magnitude of the force applied to the stick52. When the magnitude of the force with which the stick52is operated is greater than a predetermined determination value, the manual control unit61performs a control that permits the operation of the robot1. The determination value of the magnitude of the force can be set to a value of, for example, 5N or less. More preferably, the determination value of the magnitude of the force can be set to a value of equal to or less than 2N. In this way, when the operator operates the stick52with a strong force, the manual control unit61performs a control that permits the operation of the robot1. For example, when the operator presses the stick52to the deepest position, the manual control unit61performs a control that permits the operation of the robot1. When the magnitude of the force with which the stick52is operated is equal to or less than the predetermined determination value, the manual control unit61performs a control that prohibits the operation of the robot1. Alternatively, when the magnitude of the force with which the stick52is operated is less than a predetermined determination value, the manual control unit61may perform control that permits the operation of the robot1.
That is, the control that, when the operator operates the stick52with a weak force, the operation of the robot1is permitted, and when the operator operates the stick52with a strong force, the operation of the robot1is prohibited, may be performed. In this case, a determination value of the force can be set to a value, for example, not greater than 5N. More preferably, the determination value of the magnitude of the force can be set to a value of equal to or less than 2N. In this manner, the manual control unit61can switch between a state in which the operation of the robot1is permitted and a state in which the operation of the robot1is prohibited based on the time length in which the operator operates the stick or the magnitude of the force with which the operator operates the stick. In the state that the operation of the robot1is permitted, the manual control unit61drives the robot1in response to the operation of the stick52of the operator. On the other hand, in a state that the operation of the robot1is prohibited, the manual control unit61does not drive the robot1even when the operator operates the stick52. After the operation of the robot1is permitted, the operator can drive the robot1. Here, the time measuring unit62can measure the time length during which the operator does not operate the stick52. When the time length during which the operator does not operate the stick52exceeds the predetermined determination value of the time length, the manual control unit61can perform control that prohibits the operation of the robot1. That is, when the time during which the operator does not operate the stick52is long, the manual control unit61can perform control that prohibits the operation of the robot1. Alternatively, the manual control unit61can perform control that prohibits the operation of the robot1by operating the stick52. For example, when the operator performs an operation for twisting the stick52in the R axis direction, the manual control unit61can perform control that prohibits the operation of the robot1. When the operator resumes the operation of the robot1, the operator can perform the operation of the stick52described above so that the manual control unit61permits the operation of the robot1. In the permission control of the present embodiment, a state in which the operation of the robot is permitted and a state in which the operation of the robot is prohibited can be switched by the operation device. For this reason, the operability of manual operation of the robot1is improved. In addition, when the controller4is activated, immediately after performing the operation of the stick52so as to permit the operation of the robot1, the type of control can be selected by the operation of the stick52, such as the determining control for determining whether the inching operation described above is performed, or the movement direction control regarding the movement direction of the working tool. For example, immediately after the operation of the stick52is performed so as to permit the operation of the robot1, when the stick52is pushed in the Z axis direction and tilted to the positive side of the X axis, the first control among the plurality of determination controls can be set so as to be performed. Alternatively, when the stick52is tilted on the negative side of the X axis while being pushed in the Z axis direction, the second control can be set so as to be performed. 
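The permission control described above behaves like a small state machine: a short operation of the stick 52 permits the operation of the robot 1, and a long idle time or a twisting operation prohibits it again. The remaining control-selection example continues after the following sketch, which is illustrative only; the class structure and the threshold values (the idle limit in particular) are assumptions introduced for the example.

```python
class PermissionControl:
    """Sketch of the permission control of the manual control unit 61
    (assumed thresholds; not the patented implementation)."""

    def __init__(self, tap_limit_s=1.0, idle_limit_s=30.0):
        self.permitted = False
        self.tap_limit_s = tap_limit_s    # short-operation determination value
        self.idle_limit_s = idle_limit_s  # assumed idle determination value

    def on_stick_released(self, held_s):
        # First control: a short press permits the operation of the robot.
        if not self.permitted and held_s <= self.tap_limit_s:
            self.permitted = True

    def on_idle(self, idle_s):
        # Operation is prohibited when the stick is left untouched too long.
        if idle_s > self.idle_limit_s:
            self.permitted = False

    def on_twist(self):
        # Twisting the stick in the R axis direction prohibits operation.
        self.permitted = False

pc = PermissionControl()
pc.on_stick_released(held_s=0.5)   # short tap -> operation permitted
print(pc.permitted)
pc.on_twist()                      # twist -> operation prohibited again
print(pc.permitted)
```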
Alternatively, in a case where the stick52is pushed in the Z axis direction and is tilted on the positive side of the Y axis, the third control can be set so as to be performed. In this way, a plurality of controls that are predetermined may be selected and performed by a continuous operation of the stick52. According to one aspect of the present disclosure, it is possible to provide a controller of a robot for performing manual operation with excellent operability. The above embodiments can be combined as appropriate. In each of the above figures, the same or similar portions are denoted by the same reference numerals. Note that the above-described embodiments are merely examples and are not intended to limit the invention. Further, the embodiments described above include modifications of the embodiments within the scope described in the claims. | 59,807 |
11858131 | DETAILED DESCRIPTION An embodiment of the present invention will be described below with reference to the accompanying drawings. Schematic Structure of Industrial Robot FIG.1is an explanatory plan view showing a schematic structure of an industrial robot1on which teaching is performed by a teaching method for an industrial robot in accordance with an embodiment of the present invention.FIG.2is a plan view showing the hand11inFIG.1.FIGS.3A and3Bare explanatory side views showing a structure of the hand11shown inFIG.1. An industrial robot1on which teaching is performed by a teaching method for an industrial robot in this embodiment is a horizontal multi joint type robot for conveying a semiconductor wafer2which is a conveyance object. The semiconductor wafer2is formed in a thin circular plate shape. The industrial robot1is incorporated and used in a semiconductor manufacturing system3. In the following descriptions, the industrial robot1is referred to as a “robot1”, and the semiconductor wafer2is referred to as a “wafer2”. The semiconductor manufacturing system3includes, for example, an EFEM (Equipment Front End Module)4and wafer processing apparatuses5for performing predetermined processing on a wafer2. The robot1structures a part of the EFEM4. Further, the EFEM4includes, for example, a plurality of load ports7for opening and closing an FOUP (Front Open Unified Pod)6in which wafers2are accommodated, and a housing8in which the robot1is accommodated. The FOUP6is capable of accommodating a plurality of wafers2in a state that they are separated from each other at a constant interval in an upper and lower direction and, in addition, in a state that they are overlapped with each other in the upper and lower direction. The robot1conveys a wafer2between the FOUP6and the wafer processing apparatus5. For example, the robot1carries out a wafer2from the FOUP6, and the wafer2having been carried out is carried into the wafer processing apparatus5. Further, the robot1carries out the wafer2from the wafer processing apparatus5, and the wafer2having been carried out is carried into the FOUP6. The FOUP6in this embodiment is an accommodation part in which a wafer2that is a conveyance object is accommodated. The robot1includes a hand11on which a wafer2is mounted, an arm12whose tip end side is turnably connected with the hand11and which is operated in a horizontal direction, and a main body part13with which a base end side of the arm12is turnably connected. The arm12is structured of a first arm part15whose base end side is turnably connected with the main body part13, a second arm part16whose base end side is turnably connected with a tip end side of the first arm part15, and a third arm part17whose base end side is turnably connected with a tip end side of the second arm part16. The main body part13, the first arm part15, the second arm part16and the third arm part17are disposed in this order from a lower side in the upper and lower direction. The main body part13includes a lifting mechanism structured to lift and lower the arm12. Further, the robot1includes an arm part drive mechanism structured to turn the first arm part15and the second arm part16to extend and contract a part of the arm12comprised of the first arm part15and the second arm part16, a third arm part drive mechanism structured to turn the third arm part17, and a hand drive mechanism structured to turn the hand11. The hand11is formed in a substantially “Y”-shape when viewed in the upper and lower direction. 
The hand11is provided with a connecting part19which is connected with a tip end side of the third arm part17and a wafer mounting part20on which a wafer2is mounted. The hand11is disposed on an upper side of the third arm part17. The wafer mounting part20is formed in a flat plate shape and a wafer2is mounted on an upper face of the wafer mounting part20. An upper face of the wafer mounting part20is formed with a suction hole (not shown) for sucking and holding the wafer2. In other words, the hand11is provided with a suction type holding part for holding the wafer2mounted on the wafer mounting part20. The hand11is not provided with a positioning mechanism, which is structured to position the wafer2in a horizontal direction so that the wafer2is mounted at a normal mounting position of the wafer mounting part20. The hand11is provided with a protruded part21in a pin shape which is fixed to the wafer mounting part20. In this embodiment, two protruded parts21are fixed to the wafer mounting part20. The protruded part21is formed in a columnar shape. The protruded part21is protruded from an upper face of the wafer mounting part20to an upper side. A length of the protruded part21(length in the upper and lower direction) is set to be longer than a thickness of the wafer2. The protruded part21is fixed to a base end side (connecting part19side) of the wafer mounting part20. The two protruded parts21are disposed at the same position as each other in a longitudinal direction (“X” direction inFIG.2) of the hand11whose shape is a substantially “Y”-shape when viewed in the upper and lower direction and, in addition, the two protruded parts21are disposed in a separated state from each other in a direction (“Y” direction inFIG.2) perpendicular to the longitudinal direction of the hand11and the upper and lower direction. Further, the protruded parts21are disposed at positions where the protruded parts21are not contacted with the wafer2mounted at a normal mounting position of the wafer mounting part20. In a case that a wafer2is accommodated on a front side with respect to the normal accommodated position in the FOUP6and the wafer2is displaced to a front side with respect to the normal accommodated position (seeFIG.3A), the protruded parts21function to contact with a front end face of the wafer2and correct a position of the wafer2to be mounted on the hand11before the wafer2displaced to the front side is mounted on the hand11(seeFIG.3B). In other words, when a wafer2accommodated to a front side with respect to the normal accommodated position is to be carried out from the FOUP6at a time of actual use of the robot1, the protruded parts21contact with a front end face of the wafer2and correct the position of the wafer2to be mounted on the hand11. Specifically, in a case that a wafer2has been displaced to a front side from the normal accommodated position in the FOUP6, the protruded part21contacts with a front end face of the wafer2to correct the position of the wafer2so that the wafer2is mounted on the wafer mounting part20within a predetermined allowable range with respect to the wafer mounting part20. Further, in a case that a wafer2has been displaced to a front side from the normal accommodated position in the FOUP6, the wafer2is mounted on the hand11after the position of the wafer2is corrected by the protruded part21. 
In this embodiment, the FOUP6includes a restriction member which restricts a position of a wafer2accommodated in the FOUP6on a rear side of the FOUP6and thus, the wafer2is not displaced to a rear side from the normal accommodated position in the FOUP6. However, the wafer2may be displaced to a front side from the normal accommodated position in the FOUP6due to vibrations when the FOUP6is opened by the load port7and the like. Teaching Method for Industrial Robot FIG.4is an explanatory plan view showing a structure of a positioning member25which is used when teaching for the robot1shown inFIG.1is performed. When teaching for the robot1is to be performed, a positioning member25is used for positioning a wafer2to be mounted on the hand11in the horizontal direction. The positioning member25is formed in a ring shape. Specifically, the positioning member25is formed in a circular ring shape. Further, the positioning member25is formed in a cylindrical tube shape whose length in an axial direction is short and which is flat and thick. The positioning member25is attached to an upper face side of the hand11. Specifically, the positioning member25is attached to the protruded part21by fitting an inner peripheral side of the positioning member25to the protruded part21. In other words, the positioning member25is attached to an upper face side of the wafer mounting part20. Further, two positioning members25are attached to the upper face side of the wafer mounting part20. A length in the axial direction (length in the upper and lower direction) of the positioning member25is set to be substantially equal to a length in the upper and lower direction of the protruded part21and is larger than a thickness of the wafer2. In this embodiment, when a wafer2is mounted on the wafer mounting part20in a state that an end face of the wafer2is contacted with the two positioning members25(in other words, when a wafer2is mounted on the hand11in a state that an end face of the wafer2is contacted with the two positioning members25), the wafer2is disposed at a normal mounting position on the hand11(seeFIG.4). The teaching method for the robot1in this embodiment includes a positioning member attaching step in which the positioning member25is attached to the upper face side of the hand11and, after that, a teaching step in which the hand11is moved to a predetermined position and teaching for the robot1is performed and, after that, a positioning member detaching step in which the positioning member25is detached from the hand11. In the positioning member attaching step, two positioning members25are attached to the upper face side of the hand11. Further, in the positioning member attaching step, an inner peripheral side of the positioning member25is fitted to the protruded part21and thereby, the positioning member25is attached to the upper face side of the hand11. In the positioning member detaching step, the two positioning members25are detached from the upper face side of the hand11. In the teaching step, for example, in a state that the wafer2has been mounted on the hand11so that an end face of the wafer2is contacted with the two positioning members25, the hand11is moved to a position where the wafer2is placed at a normal position in an inside of the wafer processing apparatus5to teach a position of the hand11in the wafer processing apparatus5. 
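The deviation-correction loop that such teaching relies on, and which is described in more detail in the paragraphs that follow, can be pictured with the short sketch below. It is an illustration only: the function names, the tolerance, and the measure_deviation callback (assumed to return the offset of the placed wafer from its normal position) are assumptions introduced for the example, not part of the teaching method itself.

```python
def teach_position(initial_hand_xy, measure_deviation, tolerance=0.1, max_iter=20):
    """Sketch of the iterative part of the teaching step: the hand 11 is moved,
    the deviation between the placed wafer 2 and its normal position is
    measured, and the hand position is corrected until the deviation is small."""
    x, y = initial_hand_xy
    for _ in range(max_iter):
        dx, dy = measure_deviation((x, y))
        if abs(dx) <= tolerance and abs(dy) <= tolerance:
            return (x, y)                 # taught position found
        x, y = x - dx, y - dy             # correct the position of the hand
    raise RuntimeError("teaching did not converge")

# Example with a made-up measurement: the true taught position is (100, 50).
print(teach_position((0.0, 0.0), lambda p: (p[0] - 100.0, p[1] - 50.0)))
```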
Further, in the teaching step, for example, in a state that the wafer2has been mounted on the hand11so that an end face of the wafer2is contacted with the two positioning members25, the hand11is moved to a position where the wafer2is placed at a normal accommodated position in an inside of the FOUP6to teach a position of the hand11in the FOUP6. In this case, for example, in a state that a wafer2is mounted on the hand11so that an end face of the wafer2is contacted with the two positioning members25, the hand11is moved to the wafer processing apparatus5or the FOUP6and the wafer2is placed in an inside of the wafer processing apparatus5or in an inside of the FOUP6and then, a deviation amount between a position of the wafer2having been placed and a normal position for the wafer2in an inside of the wafer processing apparatus5or the FOUP6is calculated and then, while correcting the position of the hand11, finally, the hand11is moved to a position where the wafer2is placed at the normal position in the inside of the wafer processing apparatus5or the FOUP6. Further, in the teaching step, for example, the hand11in a state that no wafer2has been mounted is moved to a position where a wafer2disposed at a normal position in an inside of the wafer processing apparatus5is contacted with two positioning members25to teach a position of the hand11in the wafer processing apparatus5. Further, in the teaching step, for example, the hand11in a state that no wafer2has been mounted is moved to a position where a wafer2disposed at a normal position in an inside of the FOUP6is contacted with two positioning members25to teach a position of the hand11in the FOUP6. In this case, for example, a hand11in a state that no wafer2has been mounted is moved to the wafer processing apparatus5or the FOUP6and, after a wafer2having been disposed at the normal position in an inside of the wafer processing apparatus5or the FOUP6is mounted on the hand11, a deviation amount between an end face of the wafer2having been mounted and the two positioning members25is calculated and then, while correcting the position of the hand11, finally, the hand11in a state that no wafer2has been mounted is moved to the position where the wafer2disposed at the normal position in the inside of the wafer processing apparatus5or the FOUP6is contacted with the two positioning members25. Principal Effects in this Embodiment As described above, in this embodiment, in the positioning member attaching step before the teaching step, the positioning member25is attached to the hand11. Further, in this embodiment, when a wafer2is mounted on the hand11in a state that an end face of the wafer2is contacted with the positioning member25, the wafer2is disposed at the normal mounting position on the hand11. Therefore, in this embodiment, even in a case that the hand11is not provided with an edge grip type holding part, since a wafer2is mounted on the hand11so that an end face of the wafer2is contacted with the positioning member25, when teaching for the robot1is to be performed, the wafer2can be mounted on the hand11in a short time so that the wafer2is disposed at the normal mounting position on the hand11. In this embodiment, the positioning member25is attached to the protruded part21by fitting an inner peripheral side of the positioning member25to the protruded part21. 
In other words, in this embodiment, the positioning member25is attached to an upper face side of the hand11by using the protruded part21which corrects the position of the wafer2at a time of actual use of the robot1. Therefore, in this embodiment, it is not required to separately provide a member for attaching the positioning member25to the hand11. Accordingly, in this embodiment, a structure of the hand11is simplified and a cost of the hand11can be reduced. Further, in this embodiment, when an inner peripheral side of the positioning member25is fitted to the protruded part21, the positioning member25can be attached to the upper face side of the hand11and thus, the positioning member25can be easily attached to and detached from the hand11in a short time. Further, in this embodiment, the protruded part21is formed in a columnar shape and thus, the protruded part21can be easily manufactured and the positioning member25can be easily manufactured. Other Embodiments Although the present invention has been shown and described with reference to a specific embodiment, various changes and modifications will be apparent to those skilled in the art from the teachings herein. In the embodiment described above, the protruded part21may be formed in a polygonal prism shape such as a quadrangular prism shape. In this case, an inner peripheral face of the positioning member25is formed in a polygonal prism face shape corresponding to a shape of the protruded part21. Further, in the embodiment described above, the positioning member25may be attached to a member other than the protruded part21. In other words, the hand11may be provided with a member for attaching the positioning member25in addition to the protruded part21. In this case, the positioning member25is not required to be formed in a ring shape. In the embodiment described above, when a position in a horizontal direction of a wafer2mounted on the hand11can be determined, the number of a positioning member attached to the upper face side of the hand11may be one. In this case, contact parts which are contacted with a wafer2in two directions are formed in the positioning member. Further, in the embodiment described above, when a position in a horizontal direction of a wafer2mounted on the hand11can be determined, the number of a positioning member attached to an upper face side of the hand11may be three or more. In the embodiment described above, the hand11may be provided with no holding part for holding a wafer2which is mounted on the wafer mounting part20. In this case, for example, a support member which supports a wafer2from a lower side is attached to an upper face side of the hand11. The support member is, for example, formed of rubber. In this hand11, a wafer2is merely placed on the support member, and the wafer2mounted on the hand11is not held by the hand11. Further, in this hand11, a wafer2mounted on the hand11is not positioned in a horizontal direction. In the embodiment described above, the robot1may be provided with two hands11which are turnably connected with a tip end side of the arm12. Further, in the embodiment described above, the arm12may be structured of two arm parts and may be structured of four or more arm parts. Further, in the embodiment described above, a conveyance object which is to be conveyed by the robot1may be an object other than a wafer2. In this case, for example, a conveyance object may be formed in a square or a rectangular flat plate shape. 
In addition, an industrial robot to which one or more of the embodiments of the present invention are applied may be a robot other than a horizontal multi-joint type industrial robot. For example, an industrial robot to which one or more of the embodiments of the present invention are applied may be an industrial robot having a linear drive part structured to linearly reciprocate the hand11. While the description above refers to particular embodiments of the present invention, it will be understood that many modifications may be made without departing from the spirit thereof. The accompanying claims are intended to cover such modifications as would fall within the true scope and spirit of the present invention. The presently disclosed embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims, rather than the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
11858132 | DESCRIPTION OF EXEMPLARY EMBODIMENTS First Embodiment A first embodiment is described with reference toFIGS.1to5.FIG.1shows an encoder device EC in accordance with the present embodiment. InFIG.1, the encoder device EC detects rotational position information of a rotary shaft SF (moving part) of a motor M (power supplying unit). The rotary shaft SF is, for example, a shaft (rotor) of the motor M, but may also be an operation shaft (output shaft) connected to the shaft of the motor M via a power transmission unit such as a transmission and also connected to a load. The rotational position information detected by the encoder device EC is supplied to a motor control unit MC. The motor control unit MC controls rotation (for example, a rotational position, a rotating speed and the like) of the motor M by using the rotational position information supplied from the encoder device EC. The motor control unit MC controls the rotation of the rotary shaft SF. The encoder device EC comprises a position detection system (position detection unit)1and an electric power supplying system (electric power supplying unit)2. The position detection system1detects the rotational position information of the rotary shaft SF. The encoder device EC is a so-called multi-turn absolute encoder, and detects the rotational position information including multi-turn information indicative of the number of rotations of the rotary shaft SF and angular position information indicative of an angular position (rotating angle) less than one-turn. The encoder device EC comprises a multi-turn information detection unit3that detects the multi-turn information of the rotary shaft SF, and an angle detection unit4that detects the angular position of the rotary shaft SF. At least a part (for example, the angle detection unit4) of the position detection system1operates by receiving electric power supply from a device (for example, a drive device, a stage device, a robot device) on which the encoder device EC is mounted, in a state where a power supply (for example, a main power supply) of the device is on, for example. Also, at least a part (for example, the multi-turn information detection unit3) of the position detection system1operates by receiving electric power supply from the electric power supplying system2, in a state (for example, an emergency state and a backup state) where a power supply (for example, a main power supply) of a device on which the encoder device EC is mounted is off. For example, in a state where the supply of electric power from a device on which the encoder device EC is mounted is cut off, the electric power supplying system2intermittently supplies electric power to at least a part (for example, the multi-turn information detection unit3) of the position detection system1, and the position detection system1detects at least a part (for example, the multi-turn information) of the rotational position information of the rotary shaft SF at the time when electric power is supplied from the electric power supplying system2. The multi-turn information detection unit3detects the multi-turn information by magnetism, for example. The multi-turn information detection unit3includes, for example, a magnet11, magnetism detection units12, a detection unit13, and a storage unit14. The magnet11is provided on a disc15fixed to the rotary shaft SF. Since the disc15rotates together with the rotary shaft SF, the magnet11rotates in conjunction with the rotary shaft SF. 
The magnet11is fixed to the outside of the rotary shaft SF, and mutual relative positions of the magnet11and the magnetism detection units12are changed due to the rotation of the rotary shaft SF. The strength and direction of a magnetic field on the magnetism detection unit12formed by the magnet11are changed by the rotation of the rotary shaft SF. The magnetism detection unit12detects a magnetic field that is formed by the magnet, and the detection unit13detects the position information of the rotary shaft SF, based on a detection result of the magnetism detection unit12detecting the magnetic field that is formed by the magnet11. The storage unit14stores the position information detected by the detection unit13. The angle detection unit4is an optic or magnetic encoder, and detects position information (angular position information) within one-turn of a scale. For example, in a case of the optic encoder, the optic encoder detects the angular position within one-turn of the rotary shaft SF by reading patterning information of the scale with a light-receiving element, for example. The patterning information of the scale is, for example, bright and dark slits on the scale. The angle detection unit4detects the angular position information of the rotary shaft SF that is the same as a detection target of the multi-turn information detection unit3. The angle detection unit4includes a light-emitting element21, a scale S, a light-receiving sensor22, and a detection unit23. The scale S is provided on a disc5fixed to the rotary shaft SF. The scale S includes an incremental scale and an absolute scale. The scale S may also be provided on the disc15or may be a member integrated with the disc15. For example, the scale S may be provided on an opposite surface of the disc15to the magnet11. The scale S may be provided on at least one of the inside and the outside of the magnet11. The light-emitting element21(an irradiation unit, a light-emitting unit) irradiates the scale S with light. The light-receiving sensor22(a light detection unit) detects light emitted from the light-emitting element21and passing through the scale S. InFIG.1, the angle detection unit4is a transmission type, and the light-receiving sensor22detects light having passed through the scale S. Note that the angle detection unit4may also be a reflection type. The light-receiving sensor22supplies a signal indicative of a detection result to the detection unit23. The detection unit23detects the angular position of the rotary shaft SF by using the detection result of the light-receiving sensor22. For example, the detection unit23detects an angular position of a first resolution by using a detection result of light from the absolute scale. Also, the detection unit23detects an angular position of a second resolution higher than the first resolution by performing interpolation calculation on the angular position of the first resolution by using a detection result of light from the incremental scale. In the present embodiment, the encoder device EC comprises a signal processing unit25. The signal processing unit25calculates and processes a detection result of the position detection system1. The signal processing unit25includes a synthesis unit26and an external communication unit27. The synthesis unit26acquires the angular position information of the second resolution detected by the detection unit23. 
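The two-stage angle detection mentioned above, in which the detection unit23obtains an angular position of the first resolution from the absolute scale and refines it by interpolation calculation using the incremental scale, can be illustrated numerically. The sketch below is a simplified model and not the algorithm of the detection unit23; it assumes the absolute scale yields an integer code and that the incremental track supplies a normalized sine/cosine pair whose phase interpolates within one coarse step.

    import math

    def refined_angle(coarse_code, inc_sin, inc_cos, coarse_bits=10):
        """Combine a coarse absolute code with interpolation of an incremental sine/cosine pair.

        coarse_code      -- integer absolute reading, 0 .. 2**coarse_bits - 1 (first resolution)
        inc_sin, inc_cos -- incremental-track signals within the current coarse step (assumed normalized)
        coarse_bits      -- assumed resolution of the absolute scale
        Returns an angle in radians at a resolution finer than one coarse step.
        """
        step = 2 * math.pi / (1 << coarse_bits)                      # width of one coarse step
        fraction = (math.atan2(inc_sin, inc_cos) % (2 * math.pi)) / (2 * math.pi)
        return coarse_code * step + fraction * step

    # Example: coarse code 512 (half a turn) refined by an incremental phase of 90 degrees.
    print(refined_angle(512, 1.0, 0.0))   # approximately pi plus a quarter of one coarse step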
Also, the synthesis unit26acquires the multi-turn information of the rotary shaft SF from the storage unit14of the multi-turn information detection unit3. The synthesis unit26synthesizes the angular position information from the detection unit23and the multi-turn information from the multi-turn information detection unit3, and calculates the rotational position information. For example, when the detection result of the detection unit23is θ (rad) and the detection result of the multi-turn information detection unit3is n-turn, the synthesis unit26calculates (2π×n+θ)(rad), as the rotational position information. The rotational position information may also be information in which the multi-turn information and the angular position information less than one-turn are combined. The synthesis unit26supplies the rotational position information to the external communication unit27. The external communication unit27is communicatively connected to a communication unit MCC of the motor control unit MC in a wired or wireless manner. The external communication unit27supplies the rotational position information of a digital format to the communication unit MCC of the motor control unit MC. The motor control unit MC appropriately decodes the rotational position information from the external communication unit27of the angle detection unit4. The motor control unit MC controls the rotation of the motor M by controlling electric power (drive electric power) supplied to the motor M by using the rotational position information. The electric power supplying system2includes first and second electric signal generation units31A and31B, a battery32, and a switching unit33. The electric signal generation units31A and31B each generate an electric signal by the rotation of the rotary shaft SF. The electric signal includes a waveform where electric power (current, voltage) changes over time, for example. The electric signal generation units31A and31B each generate electric power as the electric signal by a magnetic field that changes based on the rotation of the rotary shaft SF, for example. For example, the electric signal generation units31A and31B generate electric power by a change in magnetic field that is formed by the magnet11that is used for the multi-turn information detection unit3to detect the multi-turn position of the rotary shaft SF. The electric signal generation units31A and31B are each disposed so that a relative angular position to the magnet11is changed by the rotation of the rotary shaft SF. The electric signal generation units31A and31B generate a pulsed electric signal when relative positions of the electric signal generation units31A and31B and the magnet11each reach predetermined positions, for example. The battery32supplies at least a part of electric power that is consumed in the position detection system1, based on the electric signals generated from the electric signal generation units31A and31B. The battery32includes, for example, a primary battery36such as a button-shaped battery and a dry-cell battery, and a rechargeable secondary battery37(seeFIG.4). The secondary battery of the battery32can be recharged by the electric signals (for example, current) generated from the electric signal generation units31A and31B, for example. The battery32is held in a holder35. The holder35is, for example, a circuit substrate or the like on which at least a part of the position detection system1is provided. The holder35holds the detection unit13, the switching unit33, and the storage unit14, for example.
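The synthesis rule stated earlier in this passage, (2π×n+θ)(rad) for a multi-turn count n and an angular position θ less than one-turn, can be written out directly. The short sketch below merely restates that combination; the variable names are illustrative only.

    import math

    def rotational_position(turn_count, angle_rad):
        """Rotational position information: 2*pi*n + theta (radians),
        where n is the multi-turn count and theta the angular position within one turn."""
        return 2 * math.pi * turn_count + angle_rad

    # Example: 3 full turns plus 45 degrees within the current turn.
    print(rotational_position(3, math.radians(45.0)))   # 2*pi*3 + pi/4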
The holder35is provided with a plurality of battery cases capable of accommodating the battery32, and an electrode, a wire and the like connected to the battery32, for example. The switching unit33switches whether to supply electric power from the battery32to the position detection system1, based on the electric signals generated from the electric signal generation units31A and31B. For example, the switching unit33starts supply of electric power from the battery32to the position detection system1when levels of the electric signals generated from the electric signal generation units31A and31B become equal to or higher than a threshold value. For example, the switching unit33starts supplying electric power from the battery32to the position detection system1when electric power equal to or higher than the threshold value is generated from the electric signal generation units31A and31B. Also, the switching unit33stops supplying electric power from the battery32to the position detection system1when the levels of the electric signals generated from the electric signal generation units31A and31B become lower than the threshold value. For example, the switching unit33stops the supply of electric power from the battery32to the position detection system1when the electric power generated from the electric signal generation units31A and31B becomes lower than the threshold value. For example, in a case where a pulsed electric signal is generated in the electric signal generation units31A and31B, the switching unit33starts the supply of electric power from the battery32to the position detection system1at the time when a level (electric power) of the electric signal rises from a low level to a high level, and stops the supply of electric power from the battery32to the position detection system1after a predetermined time elapses since the level (electric power) of the electric signal changes to the low level. Also, the encoder device EC has a configuration of using the electric signals (pulse signals) generated from the electric signal generation units31A and31B, as a switching signal (trigger signal) for the supply of electric power from the battery32to the position detection system1. FIG.2Ais a perspective view showing the magnet11, the electric signal generation units31A and31B, and two magnetic sensors51and52that are the magnetism detection units12inFIG.1,FIG.2Bis a plan view of the magnet11and the like inFIG.2A, as seen from a direction parallel to the rotary shaft SF, andFIG.2Cis a circuit diagram of the magnetic sensor51. Note that inFIG.2Aand the like, the rotary shaft SF ofFIG.1is shown with a straight line. InFIGS.2A and2B, the magnet11is configured so that rotation changes the direction and strength of the magnetic field in an axial direction that is a direction parallel to a straight line (symmetrical axis) passing through a center of the rotary shaft SF. The magnet11is an annular member that is coaxial with the rotary shaft SF, for example. As an example, the magnet11is configured by a first annular magnet consisting of an N pole16A, an S pole16B, an N pole16C, and an S pole16D, which are sequentially disposed so as to surround the rotary shaft SF and each have an opening angle of 90° and a fan shape, and a second annular magnet consisting of an S pole17A, an N pole17B, an S pole17C, and an N pole17D, which have the same shapes as the N pole16A to the S pole16D and are each attached and disposed to one surface of the N pole16A to the S pole16D. 
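The behavior of the switching unit33described above, which starts the supply of electric power from the battery32when the level of the generated electric signal becomes equal to or higher than a threshold value and stops it after the signal falls below the threshold value, with the supply maintained for a predetermined time after a pulse, can be sketched as a small state machine. The following model is behavioral only, with assumed threshold and hold-time values, and is not the circuit of the switching unit33or the regulator63.

    class SwitchingUnitModel:
        """Behavioral model: gate battery power to the detection circuit from the pulse signal."""

        def __init__(self, threshold=1.0, hold_time=0.005):
            self.threshold = threshold      # signal level at which supply is enabled (assumed value)
            self.hold_time = hold_time      # seconds the supply is kept after the pulse drops (assumed)
            self._off_deadline = None
            self.supplying = False

        def update(self, signal_level, now):
            """Call with the instantaneous electric-signal level and the current time in seconds."""
            if signal_level >= self.threshold:
                self.supplying = True          # start (or keep) supplying power from the battery
                self._off_deadline = None
            elif self.supplying:
                if self._off_deadline is None:
                    self._off_deadline = now + self.hold_time
                if now >= self._off_deadline:
                    self.supplying = False     # stop supplying after the hold time has elapsed
            return self.supplying

    # Example: a 1 ms pulse keeps the supply on for roughly the pulse width plus the hold time.
    sw = SwitchingUnitModel()
    for t_ms in range(0, 10):
        level = 2.0 if t_ms < 1 else 0.0
        print(t_ms, sw.update(level, t_ms / 1000.0))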
The magnet11is a permanent magnet that is magnetized to have four pairs of polarities along a circumferential direction (or also referred to as a rotating direction) around the rotary shaft SF and generates a magnetic force. A front surface (a surface opposite to the motor M inFIG.1) and a back surface (a surface on the same side as the motor M), which are main surfaces of the magnet11, are each substantially perpendicular to the rotary shaft SF. In other words, in the magnet11, the N pole16A to the S pole16D on the front surface-side and the S pole17A to the N pole17D on the back surface-side are offset by 90° in angle (for example, in positions of the respective N poles and the S poles) (180° in phase), and boundaries between the N poles and the S poles of the N pole16A to the S pole16D and boundaries between the S poles and the N poles of the S pole17A to the N pole17D substantially coincide with each other with respect to positions in the circumferential direction (angular positions). Note that the first annular magnet and the second annular magnet may be one magnet integrated continuously in the moving direction (for example, the circumferential direction, the rotating direction) or the axial direction and having a plurality of polarities or may be a hollow magnet having a space at the inside of these magnets. Herein, for convenience of descriptions, rotation in a counterclockwise direction is referred to as forward rotation, and rotation in a clockwise direction is referred to as reverse rotation, as seen from a tip end side of the rotary shaft SF (an opposite side to the motor M inFIG.1). Also, an angle of the forward rotation is indicated by a positive value, and an angle of the reverse rotation is indicated by a negative value. Note that rotation in a counterclockwise direction may be referred to as forward rotation, and rotation in a clockwise direction may be referred to as reverse rotation, as seen from a rear end side of the rotary shaft SF (the motor M-side inFIG.1). Herein, in a coordinate system fixed to the magnet11, an angular position of a boundary between the S pole16D and the N pole16A in the circumferential direction is denoted as a position11a, and angular positions (boundaries between the N pole and the S pole) sequentially rotated by 90° from the position11aare each denoted as positions11b,11cand11d. In a first section from the position11ato the position 90° counterclockwise, the N pole is disposed on the front surface-side of the magnet11, and the S pole is disposed on the back surface-side of the magnet11. In the first section, a direction of the magnetic field of the magnet11in the axial direction is substantially parallel to an axial direction AD1(seeFIG.3C) from the front surface-side toward the back surface-side of the magnet11. In the first section, the strength of the magnetic field is maximized in the middle of the position11aand the position11b, and is minimized near the positions11aand11b. In a second section of 90° in the counterclockwise direction from the position11b(a section in which the S pole is disposed on the front surface-side of the magnet11, and the N pole is disposed on the back surface-side of the magnet11), a direction of the magnetic field of the magnet11in the axial direction is substantially a direction from the back surface-side toward the front surface-side of the magnet11(for example, an opposite direction to the axial direction AD1(FIG.3C)). 
In the second section, the strength of the magnetic field is maximized in the middle of the position11band the position11c, and is minimized near the positions11band11c. Similarly, in a third section from the position11cto the position 90° counterclockwise and in a fourth section from the position11dto the position 90° counterclockwise, directions of the magnetic fields of the magnet11in the axial direction are substantially a direction from the front surface-side toward the back surface-side of the magnet11and a direction from the back surface-side toward the front surface-side of the magnet11, respectively. As such, the directions of the magnetic field formed by the magnet11in the axial direction are sequentially reversed at the positions11ato11d. The magnet11forms an AC magnetic field in which the direction of the magnetic field in the axial direction is reversed with the rotation of the magnet11, with respect to the coordinate system fixed to the outside of the magnet11. The electric signal generation units31A and31B are disposed on the outer surface of the magnet11in a direction intersecting with a normal direction of the main surfaces of the magnet11. In the present embodiment, the electric signal generation units31A and31B are provided without contacting the magnet11with each going away in a diametrical direction (for example, a radial direction) of the magnet11orthogonal to the rotary shaft SF or in a direction parallel to the diametrical direction. The first electric signal generation unit31A includes a first magnetosensitive part41A, a first electric power generation part42A, and a first set of first magnetic body45A and a first set of second magnetic body46A. Note that one of the first magnetic body45A and the second magnetic body46A may be omitted. The first magnetosensitive part41A, the first electric power generation part42A, the first magnetic body45A, and the second magnetic body46A are fixed to the outside of the magnet11, and relative positions thereof to each position on the magnet11are changed with the rotation of the magnet11. For example, inFIG.2B, the position11bof the magnet11is disposed at a position 45° in the counterclockwise from the first electric signal generation unit31A. When the magnet11rotates one-turn in the forward direction (counterclockwise) from this state, the positions11a,11d,11cand11bsequentially pass near the electric signal generation unit31A. The first magnetosensitive part41A is a magnetosensitive wire such as a Wiegand wire. In the first magnetosensitive part41A, large Barkhausen jump (Wiegand effect) is generated by the change in magnetic field associated with the rotation of the magnet11. The first magnetosensitive part41A is a cylindrical member whose projection image is rectangular, and an axial direction thereof is set in the circumferential direction of the magnet11. Hereinafter, the axial direction of the first magnetosensitive part41A, i.e., a direction perpendicular to a circular (or which may be polygonal or the like) cross-section of the first magnetosensitive part41A is referred to as the length direction of the first magnetosensitive part41A. 
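The field pattern described above, in which the direction of the axial magnetic field reverses at the positions11ato11dand its strength peaks near the middle of each 90° section, can be approximated by a simple sinusoidal model. The sketch below is an idealization with an arbitrary normalization, not measured data for the magnet11; it only reproduces the sign reversals at the section boundaries and the mid-section peaks.

    import math

    def axial_field(theta_deg, peak=1.0):
        """Idealized axial field of the four-pole magnet at a fixed point, versus rotation angle.

        theta_deg is measured from the boundary position11a. The sign flips every 90 degrees
        (positions11ato11d) and the magnitude peaks mid-section, which a sin(2*theta) profile
        captures. 'peak' is an arbitrary normalization, not a measured flux density.
        """
        return peak * math.sin(math.radians(2 * theta_deg))

    for theta in (0, 45, 90, 135, 180, 225, 270, 315):
        print(theta, round(axial_field(theta), 3))
    # 0, +1, 0, -1, 0, +1, 0, -1 : the direction reverses at each boundary and peaks mid-section.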
Also, for example, a length of the magnetosensitive part in the direction (the axial direction, the length direction, the longitudinal direction) perpendicular to the cross-section of the magnetosensitive part (for example, the first magnetosensitive part41A) is set to be longer than a length of the magnetosensitive part in a direction (width direction) parallel to the cross-section of the magnetosensitive part. When the AC magnetic field is applied in the axial direction (length direction) of the first magnetosensitive part41A and the AC magnetic field is reversed, the first magnetosensitive part41A generates a magnetic domain wall from one end toward the other end in the axial direction. As such, the length direction (axial direction) of the magnetosensitive part (for example, the first magnetosensitive part41A and the like) of the present embodiment is also referred to as an easy magnetization direction that is a direction in which magnetization is easily oriented. The first and second magnetic bodies45A and46A are formed of a ferromagnetic material such as iron, cobalt, nickel, for example. The first and second magnetic bodies45A and46A can also be referred to as yokes. The first magnetic body45A is provided between the front surface of the magnet11and one end of the first magnetosensitive part41A, and the second magnetic body46A is provided between the back surface of the magnet11and the other end of the first magnetosensitive part41A. Tip end portions of the first and second magnetic bodies45A and46A are disposed at the same angular position in the circumferential direction on the front surface and back surface of the magnet11. The polarities of the magnet11are always opposite to each other at the tip end portions of the first and second magnetic bodies45A and46A, and when the tip end portion of the first magnetic body45A is positioned near the N pole16A (or the S pole16B), the tip end portion of the second magnetic body46A is located near the S pole17A (or the N pole17B). For this reason, the first and second magnetic bodies45A and46A guide magnetic field lines from the two parts of the magnet11(for example, the N pole16A and the S pole17A), which are located at the same position in the circumferential direction of the magnet11and have polarities different from each other, to the length direction of the first magnetosensitive part41A. By the magnet11, the first magnetic body45A, the first magnetosensitive part41A, and the second magnetic body46A, a magnetic circuit MC1(seeFIG.3A) including the magnetic field lines toward the length direction of the first magnetosensitive part41A is formed. Note that a peripheral edge portion of the disc15ofFIG.1is provided with a step (not shown), so that a space into which the second magnetic body46A can be inserted is secured between the peripheral edge portion of the disc15and the back surface the magnet11. The first electric power generation part42A is, for example, a high-density coil wound and disposed on the first magnetosensitive part41A. In the first electric power generation part42A, electromagnetic induction is generated due to the generation of the magnetic domain wall in the first magnetosensitive part41A, so that an induction current flows. When the positions11ato11dof the magnet11shown inFIG.2Bpass near the electric signal generation unit31A (the tip end portions of the magnetic bodies45A and46A), a pulsed current (electric signal, electric power) is generated in the first electric power generation part42A. 
A direction of the current generated in the first electric power generation part42A is changed in accordance with the direction of the magnetic field before and after the reversal. For example, a direction of the current that is generated upon the reversal from the magnetic field toward the front surface-side to the magnetic field toward the back surface-side of the magnet11is opposite to a direction of the current that is generated upon the reversal from the magnetic field toward the back surface-side to the magnetic field toward the front surface-side of the magnet11. The electric power (induction current) that is generated in the first electric power generation part42A can be set by the number of turns in the high-density coil, for example. As shown inFIG.2A, the first magnetosensitive part41A, the first electric power generation part42A, and the parts of the first and second magnetic bodies45A and46A on the first magnetosensitive part41A-side are accommodated in a case43A. The case43A is provided with terminals42Aa and42Ab. The high-density coil of the first electric power generation part42A has one end and the other end thereof that are electrically connected to the terminals42Aa and42Ab, respectively. The electric power generated in the first electric power generation part42A can be extracted outside of the first electric signal generation unit31A via the terminals42Aa and42Ab. The second electric signal generation unit31B is disposed in an angular position forming an angle larger than 0° and smaller than 180° from the angular position in which the first electric signal generation unit31A is disposed. An angle between the electric signal generation units31A and31B is selected within a range from 22.5° to 67.5°, for example, and is about 45° inFIG.2B. The second electric signal generation unit31B has a similar configuration to the first electric signal generation unit31A. The second electric signal generation unit31B includes a second magnetosensitive part41B, a second electric power generation part42B, and a second set of first magnetic body45B and a second set of second magnetic body46B. The second magnetosensitive part41B, the second electric power generation part42B, and the second set of first and second magnetic bodies45B and46B are similar to the first magnetosensitive part41A, the first electric power generation part42A, and the first set of first and second magnetic bodies45A and46A, respectively, and the descriptions thereof are thus omitted. The second magnetosensitive part41B, the second electric power generation part42B, and the parts of the first and second magnetic bodies45B and46B on the second magnetosensitive part41B-side are accommodated in a case43B. The case43B is provided with terminals42Ba and42Bb. The electric power generated in the second electric power generation part42B can be extracted outside of the second electric signal generation unit31B via the terminals42Ba and42Bb. Note that at least a part of the magnetosensitive part (for example, the first magnetosensitive part41A and the second magnetosensitive part41B) is disposed spaced apart outside of the magnet11in the diametrical direction of the magnet11or in the parallel direction thereof. 
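The remark above that the electric power generated in the first electric power generation part42A can be set by the number of turns of the high-density coil follows from Faraday's law of induction. The sketch below gives a rough order-of-magnitude estimate with placeholder flux-change and switching-time values; none of the numbers are design values of the electric signal generation unit31A.

    def induced_emf(turns, delta_flux_wb, switch_time_s):
        """Average EMF of the pick-up coil during a large Barkhausen jump: V = N * dPhi/dt.

        turns         -- number of turns of the high-density coil
        delta_flux_wb -- flux change through the coil while the wire remagnetizes (assumed value)
        switch_time_s -- duration of the remagnetization (assumed value)
        """
        return turns * delta_flux_wb / switch_time_s

    # Placeholder example: 2000 turns, 20 nWb flux swing, 10 microsecond switching time.
    print(induced_emf(2000, 20e-9, 10e-6), "V")   # 4.0 V with these assumed numbers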
For example, when the surfaces (i.e., the surfaces on which the plurality of polarities of the magnet is aligned) of the magnet11orthogonal to the rotary shaft SF are each referred to as one surface and the other surface, the magnetosensitive part is disposed spaced apart outside with respect to a side surface (or a side surface parallel to the axial direction of the rotary shaft SF) of the magnet11orthogonal to one surface or the other surface of the magnet11and along the moving direction of the magnet. The magnetism detection unit12includes magnetic sensors51and52. The magnetic sensor51is disposed in an angular position greater than 0° and smaller than 180° with respect to the second magnetosensitive part41B (second electric signal generation unit31B) in the rotating direction of the rotary shaft SF. The magnetic sensor52is disposed in an angular position (about 45°, inFIG.2B) greater than 22.5° and smaller than 67.5° with respect to the magnetic sensor51in the rotating direction of the rotary shaft SF. As shown inFIG.2C, the magnetic sensor51includes a magnetoresistive element56, a bias magnet (not shown) for applying a magnetic field having predetermined strength to the magnetoresistive element56, and a waveform shaping circuit (not shown) for shaping a waveform from the magnetoresistive element56. The magnetoresistive element56has a full bridge shape where elements56a,56b,56cand56dare connected in series. A signal line between the elements56aand56cis connected to a power supply terminal51p, and a signal line between the elements56band56dis connected to a ground terminal51g. A signal line between the elements56aand56bis connected to a first output terminal51a, and a signal line between the elements56cand56dis connected to a second output terminal51b. The magnetic sensor52has a similar configuration to the magnetic sensor51, and the descriptions thereof are thus omitted. Subsequently, operations of the first electric signal generation unit31A of the present embodiment are described. Hereinafter, the first magnetosensitive part41A and the first electric power generation part42A of the first electric signal generation unit31A shown inFIG.2Bare collectively described as a magnetosensitive member47. A length direction of the magnetosensitive member47is the same as the length direction of the first magnetosensitive part41A, and the center in the length direction of the magnetosensitive member47is the same as the center in the length direction of the first magnetosensitive part41A. Note that since operations of the second electric signal generation unit31B are similar to those of the first electric signal generation unit31A, the descriptions thereof are omitted. FIG.3Ais a plan view depicting the magnet11and the electric signal generation unit31A shown inFIG.2A, andFIGS.3B and3Care sectional views of the magnet11shown inFIG.3A. InFIGS.3A and3B, the magnet11has a flat plate shape along a rotating direction (hereinafter, also referred to as the θ direction) around the rotary shaft SF, has a plurality of polarities (the N pole16A to the S pole16D) different from each other in the θ direction, and has two polarities (the N pole16A and the S pole17A, and the like) different from each other in a thickness direction (in the present embodiment, the axial direction AD1of the rotary shaft SF) orthogonal to the θ direction.
For this reason, the axial direction AD1can also be referred to as an orientation direction (magnetization direction) of parts of the magnet11having polarities different from each other (the N pole16A, the S pole17A, and the like). As the magnet11rotates in the θ direction, the direction and strength of the magnetic field in the axial direction or the orientation direction AD1are changed. Also, the magnetosensitive member47(or the magnetosensitive part) is disposed near the outer surface of the magnet11so that the length direction thereof is parallel to the front surface (one surface or back surface) of the magnet11having a flat plate shape. InFIG.3A, when the length direction of the magnetosensitive member47is referred to as a direction LD1, the length direction LD1is parallel to the front surface of the magnet11. In the present embodiment, the length direction LD1of the magnetosensitive member47is substantially parallel to the θ direction (circumferential direction), and is also substantially orthogonal to the axial direction AD1that is the magnetization direction of the magnet11(for example, a specific direction in which a direction of a magnetic pole is fixed). Also, as shown inFIG.3C, the length direction of the magnetosensitive member47is disposed so as to be substantially orthogonal to a tangential direction (herein, a direction parallel to the axial direction AD1) of a magnetic field line MF1, which passes through a substantial center (for example, a position of a half of a length in the length direction of the magnetosensitive member47or the magnetosensitive part (41A,41B)) in the length direction of the magnetosensitive member47, of the magnetic field lines of the magnet11. Note that the length direction LD1of the magnetosensitive member47is disposed so as to be substantially orthogonal to the thickness direction orthogonal to the θ direction. Also, the first and second magnetic bodies45A and46A guide magnetic field lines from the two parts of the magnet11(for example, the N pole16A and the S pole17A), which are located at the same angular position in the θ direction and have polarities different from each other, to the length direction LD1of the magnetosensitive member47via one end47aand the other end47bof the magnetosensitive member47. A magnetic field component unnecessary for pulse generation in the electric signal generation unit31A including magnetic field lines generated on a side surface of the magnet11is orthogonal to the length direction of the magnetosensitive member47, and the unnecessary magnetic field component does not adversely affect the generation of the magnetic domain wall from one end toward the other end of the magnetosensitive member47due to large Barkhausen jump (Wiegand effect) in the length direction of the magnetosensitive member47caused by the reversal of the AC magnetic field due to the rotation of the magnet11. For this reason, even when the magnetosensitive member47is disposed near the magnet11and the electric signal generation unit31A is thus downsized, it is possible to effectively generate the stable high-output pulse by using the electric signal generation unit31A through the reversal of the AC magnetic field in the axial direction due to the rotation of the magnet11, without being affected by the unnecessary magnetic field component. FIG.4shows a circuit configuration of the electric power supplying system2and the multi-turn information detection unit3in accordance with the present embodiment. 
InFIG.4, the electric power supplying system2includes the first electric signal generation unit31A, a rectifier stack61, the second electric signal generation unit31B, a rectifier stack62, and the battery32. Also, the electric power supplying system2includes a regulator63, as the switching unit33shown inFIG.1. The rectifier stack61is a rectifier that rectifies a current flowing from the first electric signal generation unit31A. A first input terminal61aof the rectifier stack61is connected to the terminal42Aa of the first electric signal generation unit31A. A second input terminal61bof the rectifier stack61is connected to the terminal42Ab of the first electric signal generation unit31A. A ground terminal61gof the rectifier stack61is connected to a ground line GL to which the same potential as a signal ground SG is supplied. During the operation of the multi-turn information detection unit3, the potential of the ground line GL becomes a reference potential of the circuit. An output terminal61cof the rectifier stack61is connected to a control terminal63aof the regulator63. The rectifier stack62is a rectifier that rectifies a current flowing from the second electric signal generation unit31B. A first input terminal62aof the rectifier stack62is connected to the terminal42Ba of the second electric signal generation unit31B. A second input terminal62bof the rectifier stack62is connected to the terminal42Bb of the second electric signal generation unit31B. A ground terminal62gof the rectifier stack62is connected to the ground line GL. An output terminal62cof the rectifier stack62is connected to the control terminal63aof the regulator63. The regulator63regulates electric power that is supplied from the battery32to the position detection system1. The regulator63may include a switch64provided on an electric power supply path between the battery32and the position detection system1. The regulator63controls an operation of the switch64, based on the electric signals generated from the electric signal generation units31A and31B. An input terminal63bof the regulator63is connected to the battery32. An output terminal63cof the regulator63is connected a power supply line PL. A ground terminal63gof the regulator63is connected to the ground line GL. The control terminal63aof the regulator63is an enable terminal, and the regulator63keeps a potential of the output terminal63cto a predetermined voltage in a state where a voltage equal to or higher than a threshold value is applied to the control terminal63a. An output voltage (predetermined voltage) of the regulator63is, for example, 3V when a counter67is configured by a CMOS and the like. An operating voltage of a non-volatile memory68of the storage unit14is set to the same voltage as the predetermined voltage, for example. Note that the predetermined voltage is a voltage necessary for electric power supply, and may be not only a constant voltage value, but also a voltage changing in a stepwise manner. A first terminal64aof the switch64is connected to the input terminal63b, and a second terminal64bis connected to the output terminal63c. The regulator63switches conduction and insulation states between the first terminal64aand the second terminal64bof the switch64by using the electric signals supplied from the electric signal generation units31A and31B to the control terminal63a, as a control signal (enable signal). 
For example, the switch64includes a switching device such as a MOS, a TFT and the like, the first terminal64aand the second terminal64bare a source electrode and a drain electrode, and a gate electrode is connected to the control terminal63a. The switch64is in a state (on state) where the source electrode and the drain electrode can be conductive therebetween, when the gate electrode is charged by the electric signals (electric power) generated from the electric signal generation units31A and31B and a potential of the gate electrode becomes equal to or higher than a threshold value. Note that the switch64may also be provided outside the regulator63, and may be externally attached such as a relay, for example. The multi-turn information detection unit3includes, as the magnetism detection unit12, the magnetic sensors51and52, and analog comparators65and66. The magnetism detection unit12detects the magnetic field formed by the magnet11by using the electric power supplied from the battery32. Also, the multi-turn information detection unit3includes a counter67, as the detection unit13shown inFIG.1, and includes a non-volatile memory68, as the storage unit14. The electric power supply terminal51pof the magnetic sensor51is connected to the power supply line PL. The ground terminal51gof the magnetic sensor51is connected to the ground line GL. An output terminal51cof the magnetic sensor51is connected to an input terminal65aof the analog comparator65. In the present embodiment, the output terminal51cof the magnetic sensor51outputs a voltage corresponding to a difference between a potential of the second output terminal51bshown inFIG.2Cand the reference potential. The analog comparator65is a comparator that compares a voltage output from the magnetic sensor51with a predetermined voltage. A power supply terminal65pof the analog comparator65is connected to the power supply line PL. A ground terminal65gof the analog comparator65is connected to the ground line GL. An output terminal65bof the analog comparator65is connected to a first input terminal67aof the counter67. The analog comparator65outputs an H-level signal from the output terminal when an output voltage of the magnetic sensor51is equal to or higher than a threshold value, and outputs an L-level signal from the output terminal when the output voltage of the magnetic sensor51is lower than the threshold value. The magnetic sensor52and the analog comparator66have similar configurations to the magnetic sensor51and the analog comparator65. A power supply terminal52pof the magnetic sensor52is connected to the power supply line PL. A ground terminal52gof the magnetic sensor52is connected to the ground line GL. An output terminal52cof the magnetic sensor52is connected to an input terminal66aof the analog comparator66. A power supply terminal66pof the analog comparator66is connected to the power supply line PL. A ground terminal66gof the analog comparator66is connected to the ground line GL. An output terminal58bof the analog comparator66is connected to a second input terminal67bof the counter67. The analog comparator66outputs an H-level signal from the output terminal when an output voltage of the magnetic sensor52is equal to or higher than a threshold value, and outputs an L-level signal from the output terminal66bwhen the output voltage of the magnetic sensor52is lower than the threshold value. The counter67counts the multi-turn information of the rotary shaft SF by using the electric power supplied from the battery32. 
The counter67includes, for example, a CMOS logical circuit and the like. The counter67operates using the electric power that is supplied via a power supply terminal67pand a ground terminal67g. The power supply terminal67pof the counter67is connected to the power supply line PL. The ground terminal67gof the counter67is connected to the ground line GL. The counter67performs counting processing by using a voltage that is supplied via the first input terminal67a, and a voltage that is supplied via the second input terminal67b, as a control signal. The non-volatile memory68stores at least a part (for example, the multi-turn information) of the rotational position information detected by the detection unit13by using the electric power supplied from the battery32(performs a writing operation). The non-volatile memory68stores a result (multi-turn information) of the counting by the counter67, as the rotational position information detected by the detection unit13. A power supply terminal68pof the non-volatile memory68is connected to the power supply line PL. A ground terminal68gof the storage unit14is connected to the ground line GL. The storage unit14shown inFIG.1includes the non-volatile memory68, and can keep the information written while the electric power is supplied, even in a state where the electric power is not supplied. In the present embodiment, a capacitor69is provided between the rectifier stacks61and62and the regulator63. A first electrode69aof the capacitor69is connected to a signal line for connecting the rectifier stacks61and62and the control terminal63aof the regulator63. A second electrode69bof the capacitor69is connected to the ground line GL. The capacitor69is a so-called smoothing capacitor, and reduces pulsation to reduce a load of the regulator. A constant of the capacitor69is set so that the electric power supply from the battery32to the detection unit13and the storage unit14is kept for a time period in which the rotational position information is detected by the detection unit13and the rotational position information is written into the storage unit14, for example. Also, the battery32includes, for example, a primary battery36such as a button-shaped battery and a rechargeable secondary battery37. The secondary battery37is electrically connected to a power supply unit MCE of the motor control unit MC. During at least a part of a time period (for example, a time period in which a main power supply is in an on state) in which the power supply unit MCE of the motor control unit MC can supply the electric power, the electric power is supplied from the power supply unit MCE to the secondary battery37, and the secondary battery37is recharged by the electric power. During a time period (for example, a time period in which a main power supply is in an off state) in which the power supply unit MCE of the motor control unit MC cannot supply the electric power, the supply of the electric power from the power supply unit MCE to the secondary battery37is cut off. Also, the secondary battery37may be electrically connected to a transmission path of the electric signals from the electric signal generation units31A and31B. In this case, the secondary battery37can be recharged by the electric power of the electric signals from the electric signal generation units31A and31B. For example, the secondary battery37is electrically connected to a circuit between the rectifier stack61and the regulator63.
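The statement above that the constant of the capacitor69is set so that the supply of electric power is kept for the time period in which the position information is detected and written can be turned into a rough sizing calculation. The sketch below assumes a simple constant-current discharge model and placeholder current, voltage-droop and timing figures; it does not represent the actual circuit values.

    def required_capacitance(load_current_a, allowed_droop_v, hold_time_s):
        """Capacitance needed to hold the supply level during detection and memory write.

        Constant-current discharge model: C = I * t / dV.
        All arguments are assumed placeholder values, not the circuit's real figures.
        """
        return load_current_a * hold_time_s / allowed_droop_v

    # Example: 1 mA load, 0.5 V allowed droop, 2 ms to detect the position and write the memory.
    c = required_capacitance(1e-3, 0.5, 2e-3)
    print(c * 1e6, "uF")   # 4.0 uF with these assumed numbers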
The secondary battery37can be recharged by the electric power of the electric signals that are generated from the electric signal generation units31A and31B by the rotation of the rotary shaft SF, in a state where the supply of the electric power from the power supply unit MCE is cut off. Note that the secondary battery37may also be recharged by the electric power of the electric signals that are generated from the electric signal generation units31A and31B as the motor M is driven to rotate the rotary shaft SF. The encoder device EC in accordance with the present embodiment selects which of the primary battery36and the secondary battery37supplies the electric power to the position detection system1, in a state where the supply of the electric power from an outside is cut off. The electric power supplying system2includes a power supply switcher (a power supply selection unit, a selection unit)38, and the power supply switcher38switches (selects) which of the primary battery36and the secondary battery37supplies the electric power to the position detection system1. A first input terminal of the power supply switcher38is electrically connected to a positive electrode of the primary battery36, and a second input terminal of the power supply switcher38is electrically connected to the secondary battery37. An output terminal of the power supply switcher38is electrically connected to the input terminal63bof the regulator63. The power supply switcher38selects the primary battery36or the secondary battery37, as the battery for supplying the electric power to the position detection system1, based on a remaining amount of the secondary battery37, for example. For example, when a remaining amount of the secondary battery37is equal to or greater than a threshold value, the power supply switcher38supplies the electric power from the secondary battery37, and does not supply the electric power from the primary battery36. The threshold value is set, based on electric power that is consumed in the position detection system1, and is set equal to or higher than the electric power that is to be supplied to the position detection system1, for example. For example, when the electric power that is consumed in the position detection system1can be covered by the electric power from the secondary battery37, the power supply switcher38supplies the electric power from the secondary battery37, and does not supply the electric power from the primary battery36. Also, when the remaining amount of the secondary battery37is less than the threshold value, the power supply switcher38supplies the electric power from the primary battery36, and does not supply the electric power from the secondary battery37. The power supply switcher38may also serve as a charger for controlling the recharging of the secondary battery37, for example, and may determine whether the remaining amount of the secondary battery37is equal to or greater than the threshold value by using remaining amount information of the secondary battery37that is used for control of the recharging. The secondary battery37is used in a combined manner in this way, so that it is possible to delay the consumption of the primary battery36. Therefore, the encoder device EC requires no maintenance (for example, replacement) of the battery32, or the maintenance frequency is low. Note that the battery32may include at least one of the primary battery36and the secondary battery37.
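The selection rule of the power supply switcher38described above, which uses the secondary battery37while its remaining amount is equal to or greater than a threshold value set from the electric power to be supplied and otherwise falls back to the primary battery36, can be stated explicitly. The sketch below is a behavioral illustration with assumed units; the optional safety margin is an added assumption and is not part of the embodiment.

    def select_battery(secondary_remaining, required_power, margin=1.0):
        """Return which battery supplies the position detection system.

        secondary_remaining -- estimated remaining amount of the secondary battery (assumed units)
        required_power      -- electric power to be supplied to the position detection system
        margin              -- optional safety factor on the threshold (assumption)
        The threshold is set equal to or higher than the power to be supplied, as described above.
        """
        threshold = required_power * margin
        return "secondary" if secondary_remaining >= threshold else "primary"

    print(select_battery(secondary_remaining=8.0, required_power=5.0))   # "secondary"
    print(select_battery(secondary_remaining=3.0, required_power=5.0))   # "primary"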
Also, in the above embodiment, the electric power is alternatively supplied from the primary battery36or the secondary battery37. However, the electric power may be supplied from both the primary battery36and the secondary battery37. For example, a processing unit to which the primary battery36supplies the electric power and a processing unit to which the secondary battery37supplies the electric power may be determined, in accordance with power consumption of each processing unit (for example, the magnetic sensor51, the counter67and the non-volatile memory68) of the position detection system1. Note that the secondary battery37may be recharged using at least one of the electric power that is supplied from a power supply unit EC2and the electric power of the electric signals that are generated from the electric signal generation units31A and31B. Subsequently, operations of the electric power supplying system2and the multi-turn information detection unit3are described.FIG.5is a timing chart showing operations of the multi-turn information detection unit3when the rotary shaft SF rotates in the counterclockwise direction (forward rotation). Since a timing chart showing operations of the multi-turn information detection unit3when the rotary shaft SF rotates in the clockwise direction (reverse rotation) is the chart ofFIG.5inverted over time, the descriptions thereof are omitted. In “Magnetic field” ofFIG.5, a solid line indicates a magnetic field at the position of the first electric signal generation unit31A, and a broken line indicates a magnetic field at the position of the second electric signal generation unit31B. “First electric signal generation unit” and “Second electric signal generation unit” indicate an output of the first electric signal generation unit31A and an output of the second electric signal generation unit31B, respectively, and an output of current flowing in one direction is denoted as positive (+), and an output of current flowing in an opposite direction thereof is denoted as negative (−). “Enable signal” indicates a potential that is applied to the control terminal63aof the regulator63by the electric signals generated from the electric signal generation units31A and31B, and a high level is denoted as “H” and a low level is denoted as “L”. “Regulator” indicates an output of the regulator63, and a high level is denoted as “H” and a low level is denoted as “L”. InFIG.5, “Magnetic field on first magnetic sensor” and “Magnetic field on second magnetic sensor” are magnetic fields formed on the magnetic sensors51and52. The magnetic field formed by the magnet11is shown with a long broken line, the magnetic field formed by a bias magnet is shown with a short broken line, and a synthetic magnetic field thereof is shown with a solid line. “First magnetic sensor” and “Second magnetic sensor” each indicate outputs when the magnetic sensors51and52are constantly driven; an output from the first output terminal is shown with a broken line, and an output from the second output terminal is shown with a solid line. “First analog comparator” and “Second analog comparator” indicate outputs from the analog comparators65and66, respectively. An output when the magnetic sensor and the analog comparator are constantly driven is denoted as “constant drive”, and an output when the magnetic sensor and the analog comparator are intermittently driven is denoted as “intermittent drive”.
When the rotary shaft SF rotates in the counterclockwise direction, the first electric signal generation unit31A outputs the current pulse flowing in the forward direction (“+” of “first electric signal generation unit”), at the angular positions of 45° and 225°. Also, the first electric signal generation unit31A outputs the current pulse flowing in the reverse direction (“−” of “first electric signal generation unit”), at the angular positions of 135° and 315°. The second electric signal generation unit31B outputs the current pulse flowing in the reverse direction (“−” of “second electric signal generation unit”), at the angular positions of 90° and 270°. Also, the second electric signal generation unit31B outputs the current pulse flowing in the forward direction (“−” of “second electric signal generation unit”), at the angular positions of 180° and 0° (360°). For this reason, the enable signal is switched to a high level at each of the angular positions of 45°, 90°, 135°, 180°, 225°, 270°, 315°, and 0°. Also, the regulator63supplies a predetermined voltage to the power supply line PL at each of the angular positions of 45°, 90°, 135°, 180°, 225°, 270°, 315°, and 0°, in a state where the enable signal is held at the high level. In the present embodiment, the output of the magnetic sensor51and the output of the magnetic sensor52have a phase difference of 90°, and the detection unit13detects the rotational position information by using the phase difference. The output of the magnetic sensor51is a positive sine wave in a range from the angular position 22.5° to the angular position 112.5°. In the angle range, the regulator63outputs the electric power at the angular positions of 45° and 90°. The magnetic sensor51and the analog comparator65are driven by the electric power supplied at each of the angular positions of 45° and 90°. A signal (hereinafter, referred to as “A-phase signal”) that is output from the analog comparator65is kept at an L-level in a state where the electric power is not supplied, and is an H-level at each of the angular positions of 45° and 90°. Also, the output of the magnetic sensor52is a positive sine wave in a range from the angular position of 157.5° to the angular position of 247.5°. In the angle range, the regulator63outputs the electric power at the angular positions of 180° and 225°. The magnetic sensor52and the analog comparator66are driven by the electric power supplied at each of the angular positions of 180° and 225°. A signal (hereinafter, referred to as “B-phase signal”) that is output from the analog comparator66is kept at an L-level in a state where the electric power is not supplied, and is an H-level at each of the angular positions of 180° and 225°. Herein, when the A-phase signal supplied to the counter67is an H-level (H) and the B-phase signal supplied to the counter67is an L-level, a set of the signal levels is denoted as (H, L). InFIG.5, a set of the signal levels at the angular position of 180° is (L, H), a set of the signal levels at the angular position of 225° is (H, H), and a set of the signal levels at the angular position of 270° is (H, L). When one or both of the detected A-phase signal and B-phase signal is an H-level, the counter67stores the set of the signal levels in the storage unit14. 
When one or both of the A-phase signal and B-phase signal detected next time is an H-level, the counter67reads out the set of the previous signal levels from the storage unit14and compares the set of the previous signal levels and a set of the current signal levels to determine the rotating direction of the rotary shaft SF. For example, when the set of the previous signal levels is (H, H) and the set of the current signal levels is (H, L), since the angular position in the previous detection is 225° and the angular position in the current detection is 270°, it can be seen that it is a counterclockwise direction (forward rotation). When the set of the current signal levels is (H, L) and the set of the previous signal levels is (H, H), the counter67supplies an up signal, which indicates that the counter will be counted up, to the storage unit14. When the up signal from the counter67is detected, the storage unit14updates the stored multi-turn information to a value increased by 1. In this way, the multi-turn information detection unit3in accordance with the present embodiment can detect the multi-turn information while determining the rotating direction of the rotary shaft SF. In this way, the encoder device EC in accordance with the present embodiment comprises the position detection system1(position detection unit) that detects the rotational position information of the rotary shaft SF (moving part) of the motor M (power supplying unit); the magnet11that rotates in conjunction with the rotary shaft SF and has a plurality of polarities along the rotating direction of the rotary shaft SF (the moving direction or the θ direction); and the electric signal generation unit31A (electric signal generation unit) that has the magnetosensitive member47(magnetosensitive part41A) whose magnetic characteristic is changed by the change in magnetic field associated with relative movement to the magnet11, and generates the electric signal, based on the magnetic characteristic of the magnetosensitive member47, wherein the magnetosensitive member47is disposed so that magnetosensitive member47is spaced apart from a side surface of the magnet11in the direction orthogonal to the rotating direction and the length direction of the magnetosensitive member47is orthogonal to the tangential directions of at least some of the magnetic field lines MF2of the magnet11. According to the present embodiment, a magnetic field component unnecessary for pulse generation in the electric signal generation unit31A including magnetic field lines generated on the side surface of the magnet11is orthogonal to the length direction of the magnetosensitive member47, and the unnecessary magnetic field component does not adversely affect the generation of the magnetic domain wall from one end toward the other end in the length direction of the magnetosensitive member47caused by the reversal of the AC magnetic field due to the rotation of the magnet11. For this reason, even when the magnetosensitive member47is disposed near the magnet11and the electric signal generation unit31A is thus made small, it is possible to effectively generate the high-output pulse (electric signal) with high reliability (stable output) by using the electric signal generation unit31A through the reversal of the AC magnetic field in the axial direction due to the rotation of the magnet11, without being affected by the unnecessary magnetic field component. 
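The comparison of the previous and current signal-level sets described above (for example, (H, H) at 225° followed by (H, L) at 270° indicating forward rotation) can be illustrated with a short sketch. This is a minimal example and not part of the patent disclosure: the function name is hypothetical, and the count-down branch for the reverse transition is an assumption added only for symmetry.

```python
# Minimal sketch of the count-up decision described above.
# Only the (H, H) -> (H, L) forward transition is taken from the description;
# the mirrored count-down branch is an assumed, illustrative counterpart.

def update_multi_turn_count(prev_levels, curr_levels, count):
    """prev_levels / curr_levels are (A-phase, B-phase) tuples such as ('H', 'H')."""
    if prev_levels == ('H', 'H') and curr_levels == ('H', 'L'):
        return count + 1   # up signal: forward (counterclockwise) rotation
    if prev_levels == ('H', 'L') and curr_levels == ('H', 'H'):
        return count - 1   # assumed down signal for reverse rotation
    return count           # other transitions would be handled in the same manner

# Example: the previous detection at 225 deg gave (H, H), the current detection
# at 270 deg gives (H, L), so the stored multi-turn value is increased by 1.
stored = 0
stored = update_multi_turn_count(('H', 'H'), ('H', 'L'), stored)
assert stored == 1
```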
Also, in a case where the encoder device EC comprises the battery32, it is possible to omit the maintenance (for example, replacement) of the battery32or to reduce the maintenance frequency of the battery32by using the electric signal effectively generated from the electric signal generation unit31A. Note that in order to suppress the effect of the magnetic field component unnecessary for pulse generation in the electric signal generation unit31A, it is also considered to cover the circumference of the magnetosensitive member47with a magnetic body. However, when the circumference of the magnetosensitive member47is covered with the magnetic body, the electric signal generation unit becomes larger, which increases the cost and makes it difficult to incorporate the electric signal generation unit into the drive device. Also, a resonance point of the electric signal generation unit31A is lowered, so that it may become weak against vibration shock. Also, in the encoder device EC, the electric power is supplied from the battery32to the multi-turn information detection unit3in a short time after the electric signal is generated from the electric signal generation unit31A, so that the multi-turn information detection unit3is dynamically driven (intermittently driven). After the detection and writing of the multi-turn information are over, the power delivery to the multi-turn information detection unit3is cut off but the counted value is kept because it is stored in the storage unit14. The sequence is repeated each time the predetermined position on the magnet11passes near the electric signal generation unit31A, even in a state where the supply of the electric power from the outside is cut off. Also, the multi-turn information stored in the storage unit14is read by the motor control unit MC and the like when the motor M starts next time, and is used to calculate an initial position of the rotary shaft SF, and the like. In the encoder device EC, the battery32supplies at least a part of the electric power that is consumed in the position detection system1, in accordance with the electric signal generated from the electric signal generation unit31A. Therefore, it is possible to increase the lifetime of the battery32. For this reason, it is possible to omit the maintenance (for example, replacement) of the battery32or to reduce the maintenance frequency of the battery32. For example, when the lifetime of the battery32is longer than other parts of the encoder device EC, it may be unnecessary to replace the battery32. In the meantime, when a magnetosensitive wire such as a Wiegand wire is used, the pulse current (electric signal) is obtained from the electric signal generation unit31A even though the magnet11is rotated at extremely low speed. For this reason, for example, in the state where the electric power is not supplied to the motor M, even though the rotary shaft SF (magnet11) is rotated at extremely low speed, the output of the electric signal generation unit31A can be used as the electric signal. Note that as the magnetosensitive wire (first magnetosensitive part41A), an amorphous magnetostrictive wire and the like can also be used. In this case, for example, the encoder device EC may perform full-wave rectification on the electric signal (current) generated from the electric signal generation unit (for example,31A and31B) by using the rectifier stack (for example, a rectifier), and to supply the rectified electric power to the multi-turn information detection unit3and the like. 
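The intermittent drive sequence described above, in which power is supplied for a short time after a pulse, the detection result is written, and the counted value is retained after the power delivery is cut off, can be sketched as follows. This is a minimal illustration and not the device's actual firmware; the class and attribute names are hypothetical placeholders, and the power handling is reduced to a flag.

```python
# Illustrative sketch of the intermittent drive cycle: the detection unit is
# powered only briefly after a generated pulse, and the multi-turn count
# survives power-off because it is held in the (non-volatile) storage unit.
# All names here are assumed for illustration.

class NonVolatileStorage:
    def __init__(self):
        self.multi_turn_count = 0          # retained across power-off

class IntermittentlyDrivenDetector:
    def __init__(self, storage):
        self.storage = storage
        self.powered = False

    def on_generated_pulse(self, count_up):
        self.powered = True                # battery power supplied for a short time
        if count_up:                       # detection result of this cycle
            self.storage.multi_turn_count += 1
        self.powered = False               # power delivery is cut off afterwards

storage = NonVolatileStorage()
detector = IntermittentlyDrivenDetector(storage)
detector.on_generated_pulse(count_up=True)
assert storage.multi_turn_count == 1       # counted value kept after power-off
```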
Also, in the present embodiment, as shown inFIG.3A, since the tip end portions of the first and second magnetic bodies45A and46A of the electric signal generation unit31A are disposed near the parts whose polarities are different from each other at the same angular positions on the front surface (N pole16A to the S pole16D) and back surface (S pole17A to the N pole17D) of the magnet11, the electric signal generation unit31A can be made further smaller. Note that like an electric signal generation unit31C of a modified embodiment shown inFIGS.3D and3E, a tip end portion of a first magnetic body45C on one end-side of the magnetosensitive member47may be disposed near a part (for example, the N pole16A, the S pole16B or the like) having any polarity on the front surface of the magnet11, and a tip end portion of a second magnetic body46C on the other end-side of the magnetosensitive member47may be disposed near a part (for example, the S pole16D, the N pole16A or the like) having different polarity on the front surface of the magnet11. In this case, the first and second magnetic bodies45C and46C guide the magnetic field lines from the two parts (for example, the N pole16A and the S pole16D) of the magnet11, which are located at different positions in the rotating direction and have polarities different from each other, to the length direction of the magnetosensitive member47. Also in the electric signal generation unit31C, a magnetic circuit MC2is formed from the magnet11so as to pass the first magnetic body45C, the magnetosensitive member47and the second magnetic body46C. Therefore, the magnetosensitive member47can effectively output the stable pulse by the reversal of the AC magnetic field due to the rotation of the magnet11, without being affected by the unnecessary magnetic field on the side surface of the magnet11. In the above embodiment, the two electric signal generation units31A and31B are provided. However, the encoder device EC may comprise only one electric signal generation unit31A. Also, the encoder device EC may comprise three or more electric signal generation units. Also, in other embodiments and modified embodiments thereof to be described later, one electric signal generation unit will be described. However, a plurality of electric signal generation units may be provided. Second Embodiment A second embodiment is described with reference toFIGS.6A to6E. Note that inFIGS.6A to6D, the parts corresponding toFIGS.3A to3Care denoted with the same reference signs, and the detailed descriptions thereof are omitted.FIG.6Ais a plan view showing a magnet11A and an electric signal generation unit31D of an encoder device in accordance with the present embodiment,FIG.6Bis a side view ofFIG.6A, andFIG.6Cis an enlarged view showing a part ofFIG.6A. InFIGS.6A and6B, the magnet11A is configured so that the direction and strength of the magnetic field in a radial direction (or the diametrical direction, or a radiation direction) AD2with respect to the rotary shaft SF are changed by rotation. The magnet11A is, for example, an annular member that is coaxial with the rotary shaft SF. The main surfaces (front surface and back surface) of the magnet11A are substantially perpendicular to the rotary shaft SF, respectively. 
The magnet11A includes an annular magnet on an outer periphery-side where an N pole16E and an S pole16F are alternately disposed in the rotating direction or the circumferential direction (θ direction) of the rotary shaft SF and an annular magnet on an inner periphery-side where an S pole17E and an N pole17F are alternately disposed in the θ direction. Phases of the annular magnet on the outer periphery-side and the annular magnet on the inner periphery-side are offset by 180°. In the magnet11A, a boundary between the S pole17E and the N pole17F on the inner periphery-side substantially matches a boundary between the N pole16E and the S pole16F on the outer periphery-side, with respect to the angular position in the θ direction. The magnet11A has a flat plate shape along the θ direction, and a plurality of polarities (the N pole16E, the S pole16F and the like) along the θ direction. Also, in the magnet11A, a direction orthogonal to the rotating direction (moving direction), i.e., in the present embodiment, the radial direction AD2with respect to the rotary shaft SF is regarded as a width direction of the magnet11A. The magnet11A has polarities (the N pole16E, the S pole17E and the like) different from each other in the width direction (radial direction AD2) orthogonal to the θ direction, on the front surface or back surface. The magnet11A is a permanent magnet magnetized to have a plurality of pairs of polarities (for example, 12 pairs) in the θ direction. In the present embodiment, a magnetization direction (orientation direction) of the magnet11A is the radial direction AD2. In the present embodiment, the magnetosensitive member47of the electric signal generation unit31D is disposed so that the length direction LD2is orthogonal to the front surface of the magnet11A having a flat plate shape, in the vicinity of an outer surface of the magnet11A. Also, the length direction LD2of the magnetosensitive member47in the electric signal generation unit31D is disposed to be orthogonal to the radial direction AD2with being spaced in the diametrical direction (for example, the radial direction) of the magnet11A orthogonal to the rotary shaft SF or in a direction parallel to the diametrical direction. In this case, the length direction LD2is parallel to the axial direction of the rotary shaft SF. In other words, in the present embodiment, the length direction LD2of the magnetosensitive member47is substantially orthogonal to the radial direction AD2that is the magnetization direction of the magnet11A, and is also substantially orthogonal to the θ direction (circumferential direction). Also, a tip end portion of a first magnetic body45D on one end-side of the magnetosensitive member47is disposed near an outer surface of a part of one polarity (for example, the N pole16E) on the outer periphery-side of the magnet11A, and a tip end portion of a second magnetic body46D on the other end-side of the magnetosensitive member47is disposed near an outer surface of a part (for example, the S pole16F) of the other polarity (polarity different from the one polarity) on the outer periphery-side of the magnet11A. In other words, the first and second magnetic bodies45D and46D guide the magnetic field lines from the two parts (for example, the N pole16E and the S pole16F) of the magnet11A, which are located at different positions in the θ direction and have polarities different from each other, to the length direction LD2of the magnetosensitive member47. The other configurations are similar to the first embodiment. 
Also in the present embodiment, a magnetic circuit MC3is formed from the magnet11A so as to pass the first magnetic body45D, the magnetosensitive member47, and the second magnetic body46D. Also, as shown inFIG.6C, the length direction of the magnetosensitive member47is disposed so as to be substantially orthogonal to a tangential direction (herein, the θ direction) of the magnetic field line MF2, which passes through a substantial center in the length direction of the magnetosensitive member47, of the magnetic field lines generated on the side surface of the magnet11A. A magnetic field component unnecessary for pulse generation in the electric signal generation unit31D including magnetic field lines generated on the side surface of the magnet11A is orthogonal to the length direction of the magnetosensitive member47, and the unnecessary magnetic field component does not adversely affect the generation of the magnetic domain wall from one end toward the other end of the magnetosensitive member47caused by the reversal of the AC magnetic field due to the rotation of the magnet11A. For this reason, even when the magnetosensitive member47is disposed near the magnet11A and the electric signal generation unit31D is thus made small, it is possible to effectively generate the high-output pulse (electric signal) by using the electric signal generation unit31D through the reversal of the AC magnetic field in the radial direction AD2due to the rotation of the magnet11A, without being affected by the unnecessary magnetic field component. Also, in a case where the encoder device comprises the battery32, it is possible to omit the maintenance (for example, replacement) of the battery32or to reduce the maintenance frequency of the battery32by using the electric signal effectively generated from the electric signal generation unit31D. Note that in the present embodiment, like an electric signal generation unit31E of a modified embodiment shown inFIGS.6D and6E, a tip end portion of a first magnetic body45E on one end-side of the magnetosensitive member47may be disposed near a part (for example, the N pole16E, the S pole16F or the like) of one polarity on the outer periphery-side of the magnet11A, and a tip end portion of a second magnetic body46E on the other end-side of the magnetosensitive member47may be disposed near a part (for example, the S pole17E, the N pole17F or the like) of different polarity on the inner periphery-side of the magnet11A. In this case, the first and second magnetic bodies45E and46E guide the magnetic field lines from the two parts (for example, the N pole16E and the S pole17E) of the magnet11A, which are located at different positions in the width direction (radial direction AD2) of the magnet11A and have polarities different from each other, to the length direction of the magnetosensitive member47. Also in the electric signal generation unit31E, a magnetic circuit MC4is formed from the magnet11A so as to pass the first magnetic body45E, the magnetosensitive member47, and the second magnetic body46E. Therefore, the magnetosensitive member47can effectively output the stable pulse by the reversal of the AC magnetic field due to the rotation of the magnet11A, without being affected by the unnecessary magnetic field on the side surface of the magnet11A. Third Embodiment A third embodiment is described with reference toFIGS.7A to7C. 
Note that inFIGS.7A to7C, the parts corresponding toFIGS.6A to6Care denoted with the same reference signs, and the detailed descriptions thereof are omitted.FIG.7Ais a plan view showing a magnet11A and an electric signal generation unit31F of an encoder device in accordance with the present embodiment, andFIG.7Bis a side view showing the magnet11A shown inFIG.7A, as a sectional view. InFIGS.7A and7B, the magnet11A is configured so that the direction and strength of the magnetic field in the radial direction AD2with respect to the rotary shaft SF are changed by rotation. In the present embodiment, the magnetosensitive member47of the electric signal generation unit31F is disposed in a space K so that the length direction LD2is orthogonal to the front surface of the magnet11A having a flat plate shape, in the vicinity of an inner surface of the magnet11A having the space K inside. Also, the length direction LD2of the magnetosensitive member47in the electric signal generation unit31F is disposed to be orthogonal to the radial direction AD2with being spaced in a diametrical direction (for example, the radial direction) of the magnet11A orthogonal to the rotary shaft SF or in a direction parallel to the diametrical direction. In the present embodiment, the length direction LD2of the magnetosensitive member47is substantially orthogonal to the radial direction AD2that is the magnetization direction of the magnet11A, and is also substantially parallel to the axial direction of the rotary shaft SF. Also, a tip end portion of a first magnetic body45F on one end-side of the magnetosensitive member47is disposed near an inner surface of a part of one polarity (for example, the N pole17F) on the inner periphery-side of the magnet11A, and a tip end portion of a second magnetic body46F on the other end-side of the magnetosensitive member47is disposed near an inner surface of a part (for example, the S pole17E) of the other polarity on the inner periphery-side of the magnet11A. In other words, the first and second magnetic bodies45F and46F guide the magnetic field lines from the two parts (for example, the N pole17F and the S pole17E) of the magnet11A, which are located at different positions in the θ direction and have polarities different from each other, to the length direction LD2of the magnetosensitive member47. The other configurations are similar to the first embodiment. Also in the present embodiment, a magnetic circuit MC5is formed from the magnet11A so as to pass the first magnetic body45F, the magnetosensitive member47, and the second magnetic body46F. Also, the length direction of the magnetosensitive member47is disposed so as to be substantially orthogonal to a tangential direction (herein, the θ direction) of the magnetic field line, which passes through a substantial center in the length direction LD2of the magnetosensitive member47, of the magnetic field lines generated on the inner surface of the magnet11A. A magnetic field component unnecessary for pulse generation in the electric signal generation unit31F including magnetic field lines generated on the inner surface of the magnet11A is orthogonal to the length direction of the magnetosensitive member47, and the unnecessary magnetic field component does not adversely affect the generation of the magnetic domain wall from one end toward the other end of the magnetosensitive member47caused by the reversal of the AC magnetic field due to the rotation of the magnet11A. 
For this reason, even when the magnetosensitive member47is disposed near the inner surface of the magnet11A and the electric signal generation unit31F is thus made small, it is possible to effectively generate the high-output pulse (electric signal) by using the electric signal generation unit31F through the reversal of the AC magnetic field in the radial direction AD2due to the rotation of the magnet11A, without being affected by the unnecessary magnetic field component. The other effects are similar to the above-described embodiments. Note that in the present embodiment, like an electric signal generation unit of a modified embodiment shown inFIG.7C, the magnetosensitive member47may be disposed on an outer surface of the magnet11A so that the length direction of the magnetosensitive member47is substantially perpendicular to the outer surface. In this case, a tip end portion of a magnetic body45F1on one end-side of the magnetosensitive member47is disposed near a part (for example, the N pole16E, the S pole16F or the like) of one polarity on the outer periphery-side of the magnet11A, and the other end of the magnetosensitive member47is disposed near a part (for example, the S pole16F, the N pole16E or the like) of different polarity on the outer periphery-side of the magnet11A. In this case, one end of the magnetic body45F1is disposed near one end-side of the magnetosensitive member47, and the other end of the magnetic body45F1is disposed near the part of one polarity on the outer periphery-side of the magnet11A. In other words, in the present modified embodiment, the other magnetic body (the first magnetic body or the second magnetic body) is omitted. In the present modified embodiment, the length direction of the magnetosensitive member47is substantially parallel to the radial direction that is the magnetization direction of the magnet11A, and is also substantially orthogonal to the θ direction (circumferential direction). In the present modified embodiment, a magnetic circuit MC51is formed from the magnet11A so as to pass the magnetic body45F1and the magnetosensitive member47. Therefore, the magnetosensitive member47can effectively output the stable pulse by the reversal of the AC magnetic field due to the rotation of the magnet11A, without being affected by the unnecessary magnetic field on the side surface of the magnet11A. Fourth Embodiment A fourth embodiment is described with reference toFIGS.8A to8E. Note that inFIGS.8A to8E, the parts corresponding toFIGS.3A to3Care denoted with the same reference signs, and the detailed descriptions thereof are omitted. FIG.8Ais a plan view showing a magnet11and an electric signal generation unit31G of an encoder device in accordance with the present embodiment, andFIGS.8B and8Care side views showing the magnet11shown inFIG.8A, as sectional views. InFIGS.8A and8B, the magnet11is configured so that the direction and strength of the magnetic field in the axial direction AD1parallel to the rotary shaft SF are changed by rotation. The magnet11has a plurality of polarities (for example, the N pole16A and the S pole16B) in the θ direction, and also has parts (for example, the N pole16A and the S pole17A) of two polarities different from each other in the thickness direction (axial direction AD1) orthogonal to the θ direction. The magnetization direction of the magnet11is the axial direction AD1.
In the present embodiment, the magnetosensitive member47of the electric signal generation unit31G is disposed so that the length direction LD3of the magnetosensitive member47is parallel to the front surface of the magnet11having a flat plate shape and the length direction LD3is perpendicular to the outer surface of the magnet11, in the vicinity of the outer surface of the magnet11. Also, the length direction LD3of the magnetosensitive member47in the electric signal generation unit31G is disposed to be orthogonal to the axial direction AD1with being spaced in the diametrical direction (for example, the radial direction) of the magnet11orthogonal to the rotary shaft SF or in a direction parallel to the diametrical direction. In the present embodiment, the length direction LD3of the magnetosensitive member47is substantially orthogonal to the axial direction AD1that is the magnetization direction of the magnet11, is substantially parallel to the radial direction of the rotary shaft SF, and is substantially orthogonal to the θ direction (circumferential direction). Also, a tip end portion of a first magnetic body45G on one end-side of the magnetosensitive member47is disposed near a part of one polarity (for example, the N pole16A) on the front surface-side of the magnet11, and a tip end portion of a second magnetic body46G on the other end-side of the magnetosensitive member47is disposed near a part (for example, the S pole17A) of the other polarity on the back surface-side of the magnet11. In other words, the first and second magnetic bodies45G and46G guide the magnetic field lines from the two parts (for example, the N pole16A and the S pole17A) of the magnet11, which are located at same angular position in the θ direction and have polarities different from each other, to the length direction LD3of the magnetosensitive member47. The other configurations are similar to the first embodiment. Also in the present embodiment, a magnetic circuit MC6is formed from the magnet11so as to pass the first magnetic body45G, the magnetosensitive member47, and the second magnetic body46G. Also, as shown inFIG.8C, the length direction LD3of the magnetosensitive member47is disposed so as to be substantially orthogonal to a tangential direction MD3(herein, parallel to the axial direction AD1) of the magnetic field line MF3, which passes through a substantial center in the length direction LD3of the magnetosensitive member47, of the magnetic field lines generated on the side surface of the magnet11. A magnetic field component unnecessary for pulse generation in the electric signal generation unit31G including magnetic field lines generated on the side surface of the magnet11is orthogonal to the length direction of the magnetosensitive member47, and the unnecessary magnetic field component does not adversely affect the generation of the magnetic domain wall from one end toward the other end of the magnetosensitive member47caused by the reversal of the AC magnetic field due to the rotation of the magnet11. For this reason, even when the magnetosensitive member47is disposed near the magnet11and the electric signal generation unit31G is thus made small, it is possible to effectively generate the high-output pulse (electric signal) by using the electric signal generation unit31G through the reversal of the AC magnetic field in the axial direction AD1due to the rotation of the magnet11, without being affected by the unnecessary magnetic field component. The other effects are similar to the first embodiment. 
Note that in the present embodiment, like an electric signal generation unit31H of a modified embodiment shown inFIGS.8D and8E, a tip end portion of a first magnetic body45H on one end-side of the magnetosensitive member47may be disposed near a part (for example, the N pole16A, the S pole16B or the like) of one polarity on the front surface-side of the magnet11, and a tip end portion of a second magnetic body46H on the other end-side of the magnetosensitive member47may be disposed near a part (for example, the S pole16D, the N pole16A or the like) of different polarity on the front surface-side of the magnet11. In this case, the first and second magnetic bodies45H and46H guide the magnetic field lines from the two parts of the magnet11(for example, the N pole16A and the S pole16D), which are located at different positions in the θ direction of the magnet11and have polarities different from each other, to the length direction of the magnetosensitive member47. Also in the electric signal generation unit31H, a magnetic circuit MC7is formed from the magnet11so as to pass the first magnetic body45H, the magnetosensitive member47, and the second magnetic body46H, and the magnetosensitive member47can effectively output the stable pulse by the reversal of the AC magnetic field due to the rotation of the magnet11, without being affected by the unnecessary magnetic field on the side surface of the magnet11. Fifth Embodiment A fifth embodiment is described with reference toFIGS.9A and9B. Note that inFIGS.9A and9B, the parts corresponding toFIGS.3A to3Care denoted with the same reference signs, and the detailed descriptions thereof are omitted.FIG.9Ais a plan view showing a magnet11B, magnetic sensors51and52(magnetism detection unit12), and an electric signal generation unit31A of an encoder device in accordance with the present embodiment, andFIG.9Bis a side view showing the magnet11B ofFIG.9A, as a sectional view. InFIGS.9A and9B, the magnet11B includes an annular magnet on the outer periphery-side where an N pole16G and an S pole16H each having an opening angle of 180° and a fan shape are disposed in the rotating direction (θ direction) of the rotary shaft SF and an annular magnet on the inner periphery-side where an S pole16J and an N pole16I each having an opening angle of 180° and a fan shape are disposed in the θ direction. Also, on back surfaces of the N pole16G and the S pole16H on the outer periphery-side, an S pole17G and an N pole17H having the same shape and different polarity are bonded, and on back surfaces of the N pole16I and the S pole16J on the inner periphery-side, an S pole17I and an N pole17J having the same shape and different polarity are bonded. As such, phases of the annular magnet on the outer periphery-side of the magnet11B and the annular magnet on the inner periphery-side are offset by 180°. Also, the magnet11B has two polarities different from each other in the thickness direction (axial direction AD1). In the magnet11B, a boundary between the S pole16J and the N pole16I on the inner periphery-side substantially matches a boundary between the N pole16G and the S pole16H on the outer periphery-side, with respect to the angular position in the θ direction. 
In the present embodiment, the magnetosensitive member47of the electric signal generation unit31A is disposed so that the length direction of the magnetosensitive member47is parallel to the front surface of the magnet11B having a flat plate shape and the length direction is parallel to the rotating direction (θ direction) of the rotary shaft SF, in the vicinity of the outer surface of the magnet11B. Also, the tip end portion of the first magnetic body45A on one end-side of the magnetosensitive member47is disposed near a part of one polarity (for example, the S pole16H) on the front surface-side of the magnet11B, and the tip end portion of the second magnetic body46A on the other end-side of the magnetosensitive member47is disposed near a part (for example, the N pole17H) of the other polarity on the back surface-side of the magnet11B. In other words, the first and second magnetic bodies45A and46A guide the magnetic field lines from the two parts (for example, the S pole16H and the N pole17H) of the magnet11B, which are located at the same angular position in the θ direction and have polarities different from each other, to the length direction of the magnetosensitive member47. Also, the magnetic sensors51and52are disposed so as to overlap a boundary part between the annular magnet on the inner periphery-side and the annular magnet on the outer periphery-side, in the vicinity of the front surface of the magnet11B. An angle between the magnetic sensors51and52is, for example, about 90°. The other configurations are similar to the first embodiment. In the present embodiment, the magnetization direction of the magnet11B with respect to the electric signal generation unit31A is the axial direction AD1, and the magnetization direction with respect to the magnetic sensors51and52is the radial direction. Also, the length direction of the magnetosensitive member47is substantially orthogonal to the axial direction AD1that is the magnetization direction of the magnet11B, and is substantially parallel to the θ direction (circumferential direction). Also in the present embodiment, the magnetic circuit MC1is formed from the magnet11B so as to pass the first magnetic body45A, the magnetosensitive member47, and the second magnetic body46A. Also, the length direction of the magnetosensitive member47is disposed so as to be substantially orthogonal to a tangential direction (herein, parallel to the axial direction AD1) of the magnetic field line, which passes through a substantial center in the length direction of the magnetosensitive member47, of the magnetic field lines generated on the side surface of the magnet11B. A magnetic field component unnecessary for pulse generation in the electric signal generation unit31A including magnetic field lines generated on the side surface of the magnet11B is orthogonal to the length direction of the magnetosensitive member47, and the unnecessary magnetic field component does not adversely affect the generation of the magnetic domain wall from one end toward the other end of the magnetosensitive member47caused by the reversal of the AC magnetic field due to the rotation of the magnet11B.
For this reason, even when the magnetosensitive member47is disposed near the magnet11B and the electric signal generation unit31A is thus made small, it is possible to effectively generate the high-output pulse (electric signal) by using the electric signal generation unit31A through the reversal of the AC magnetic field in the axial direction AD1due to the rotation of the magnet11B, without being affected by the unnecessary magnetic field component. Also, each of the magnetic sensors51and52can detect a change in magnetic field including a magnetic field line MF4that is generated between the annular magnet on the inner periphery-side of the magnet11B and the annular magnet on the outer periphery-side. The encoder device of the present embodiment can obtain the angle and multi-turn information of the rotary shaft SF by using detection results of the magnetic sensors51and52. The other effects are similar to the first embodiment. Sixth Embodiment A sixth embodiment is described with reference toFIGS.10A and10B. Note that inFIGS.10A and10B, the parts corresponding toFIGS.9A and9Bare denoted with the same reference signs, and the detailed descriptions thereof are omitted. FIG.10Ais a plan view showing a rotational disc11D for optical sensor, magnetic sensors51and52, an optical sensor21A, and an electric signal generation unit31A of an encoder device in accordance with the present embodiment, andFIG.10Bis a side view showing a magnet11B ofFIG.10A, as a sectional view. InFIGS.10A and10B, the annular rotational disc11D (which is actually provided with an opening (not shown) through which the rotary shaft SF passes) is fixed to the front surface of the magnet11B. The rotational disc11D and the magnet11B rotate in the θ direction in conjunction with the rotary shaft SF. An incremental scale11Da and an absolute scale11Db are formed concentrically on a front surface of the rotational disc11D. Also, the rotational disc11D is disposed between the tip end portion of the first magnetic body45A of the electric signal generation unit31A and the front surface of the magnet11B. The magnetic circuit MC1of the electric signal generation unit31A is formed to pass through the rotational disc11D. Also, the optical sensor21A includes a light-emitting element21Aa that generates illumination light, and light-receiving sensors21Ab and21Ac that receive the illumination light generated from the light-emitting element21Aa and reflected on the incremental scale11Da and the absolute scale11Db. The encoder device of the present embodiment can obtain the rotating angle integrated each time the rotary shaft SF rotates by a predetermined angle from a predetermined reference angle, and an absolute angular position within one-turn of the rotary shaft SF by processing detection signals of the light-receiving sensors21Ab and21Ac in a detection unit (not shown) similar to the detection unit23ofFIG.1. Also, it is possible to obtain the multi-turn information of the rotary shaft SF by performing one counting each time a relative angular position exceeds 360°. Similarly, the encoder device of the present embodiment can obtain the rotating angle and multi-turn information of the rotary shaft SF by using detection results of the magnetic sensors51and52.
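The statement above that the multi-turn information is obtained by performing one counting each time the relative angular position exceeds 360° can be reduced to a one-line computation. The following is a minimal sketch under the assumption that an accumulated relative angle (in degrees) is already available from the incremental-scale processing; the function name is hypothetical.

```python
# Minimal sketch: one count per completed turn of the accumulated relative angle.
# The accumulated angle is assumed to be supplied by the incremental-scale processing.

def multi_turn_count_from_angle(accumulated_angle_deg):
    """Return the number of completed 360-degree turns (non-negative angles assumed)."""
    return int(accumulated_angle_deg // 360)

assert multi_turn_count_from_angle(350.0) == 0
assert multi_turn_count_from_angle(365.0) == 1   # counted once when 360 deg is exceeded
assert multi_turn_count_from_angle(725.0) == 2
```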
Also, a magnetic field component unnecessary for pulse generation in the electric signal generation unit31A including magnetic field lines generated on the side surface of the magnet11B is orthogonal to the length direction of the magnetosensitive member47, and the unnecessary magnetic field component does not adversely affect the generation of the magnetic domain wall from one end toward the other end of the magnetosensitive member47caused by the reversal of the AC magnetic field due to the rotation of the magnet11B. For this reason, even when the magnetosensitive member47is disposed near the magnet11B and the electric signal generation unit31A is thus made small, it is possible to effectively generate the high-output pulse (electric signal) by using the electric signal generation unit31A through the reversal of the AC magnetic field in the axial direction AD1due to the rotation of the magnet11B, without being affected by the unnecessary magnetic field component. The other effects are similar to the first embodiment. Note that when a plurality of electric signal generation units are provided, as in the above embodiments and modified embodiments, the electric power that is output from the electric signal generation unit31A may also be used as a detection signal for detecting the multi-turn information or may be used for supply to a detection system and the like. Note that in the first embodiment, the magnet11is an eight-pole magnet having four poles in the circumferential direction and two poles in the thickness direction. However, the present invention is not limited thereto, and can be changed as appropriate. For example, the number of poles of the magnet11in the circumferential direction may be two or four or more. Note that in the above embodiments, the position detection system1detects the rotational position information of the rotary shaft SF (moving part), as the position information, but may also detect at least one of a position in a predetermined direction, a speed and an acceleration, as the position information. The encoder device EC may comprise a rotary encoder or a linear encoder. Also, the encoder device EC may have a configuration where the electric power generation part and the detection unit are provided on the rotary shaft SF and the magnet11is provided outside the moving body (for example, the rotary shaft SF), so that the relative positions of the magnet and the detection unit are changed with movement of the moving part. Also, the position detection system1may not detect the multi-turn information of the rotary shaft SF, and the multi-turn information may be detected by a processing unit external to the position detection system1. In the above embodiments, the electric signal generation units31A and31B generate the electric power (electric signal) when a predetermined positional relation with the magnet11is satisfied. The position detection system1may also detect (count) the position information (for example, the rotational position information including the multi-turn information or the angular position information) of the moving part (for example, the rotary shaft SF) by using, as the detection signal, the change in electric power (signal) generated from the electric signal generation units31A and31B.
For example, the electric signal generation units31A and31B may be used as sensors (position sensors), and the position detection system1may detect the position information of the moving part by the electric signal generation units31A and31B and one or more sensors (for example, the magnetic sensor and the light-receiving sensor). Also, when the number of the electric signal generation units is two or more, the position detection system1may detect the position information by using the two or more electric signal generation units, as sensors. For example, the position detection system1may detect the position information of the moving part by using the two or more electric signal generation units, as sensors, without using the magnetic sensors, or may detect the position information of the moving part without using the light-receiving sensor. Also, similarly to the magnetic sensor, the position detection system1may determine the rotating direction of the rotary shaft SF by using the two or more electric signal generation units, as sensors, based on two or more electric signals. Also, the electric signal generation units31A and31B may supply at least a part of electric power that is consumed in the position detection system1. For example, the electric signal generation units31A and31B may supply the electric power to a processing unit of the position detection system1, which has relatively small power consumption. Also, the electric power supply system2may not supply the electric power to some of the position detection system1. For example, the electric power supplying system2may intermittently supply the electric power to the detection unit13and may not supply the electric power to the storage unit14. In this case, the electric power may be supplied intermittently or continuously to the storage unit14from a power supply, a battery and the like provided outside the electric power supplying system2. The electric power generation part may generate the electric power by a phenomenon other than the large Barkhausen jump, and for example, may generate the electric power by electromagnetic induction associated with the change in magnetic field due to movement of the moving part (for example, the rotary shaft SF). The storage unit in which the detection result of the detection unit is stored may be provided outside the position detection system1or may be provided outside the encoder device EC. [Drive Device] An example of the drive device is described.FIG.11shows an example of a drive device MTR. In descriptions below, the constitutional parts that are the same as or equivalent to the above embodiments are denoted with the same reference signs for omitting or simplifying the descriptions. The drive device MTR is a motor device including an electric motor.
The drive device MTR comprises the rotary shaft SF, a main body part (drive part) BD that rotates the rotary shaft SF, and the encoder device EC that detects the rotational position information of the rotary shaft SF. The rotary shaft SF has a load-side end portion SFa, and an anti-load-side end portion SFb. The load-side end portion SFa is connected to another power transmission mechanism such as a decelerator. A scale S is fixed to the anti-load-side end portion SFb via a fixing part. The scale S is fixed, and the encoder device EC is attached. The encoder device EC is an encoder device in accordance with the embodiments, the modified embodiments or combinations thereof. In the drive device MTR, the motor control unit MC shown inFIG.1controls the main body part BD by using a detection result of the encoder device EC. Since the replacement of the battery of the encoder device EC is not required or is less required, the drive device MTR can reduce the maintenance cost. Note that the drive device MTR is not limited to the motor device, and may also be another drive device having a shaft part that rotates by using a hydraulic pressure or pneumatic pressure. [Stage Device] An example of a stage device is described.FIG.12shows a stage device STG. The stage device STG has such a configuration that a rotational table (moving object) TB is attached to the load-side end portion SFa of the rotary shaft SF of the drive device MTR shown inFIG.11. In descriptions below, the constitutional parts that are the same as or equivalent to the above embodiments are denoted with the same reference signs for omitting or simplifying the descriptions. In the stage device STG, when the drive device MTR is driven to rotate the rotary shaft SF, the rotation is transmitted to the rotational table TB. At this time, the encoder device EC detects the angular position of the rotary shaft SF, and the like. Therefore, it is possible to detect an angular position of the rotational table TB by using an output from the encoder device EC. Note that a decelerator and the like may be disposed between the load-side end portion SFa of the drive device MTR and the rotational table TB. Since the replacement of the battery of the encoder device EC is not required or is less required, the stage device STG can reduce the maintenance cost. Note that the stage device STG can be applied to a rotational table provided in a machine tool such as a lathe, for example. [Robot Device] An example of a robot device is described.FIG.13is a perspective view showing a robot device RBT. Note thatFIG.13pictorially shows a part (joint part) of the robot device RBT. In descriptions below, the constitutional parts that are the same as or equivalent to the above embodiments are denoted with the same reference signs for omitting or simplifying the descriptions. The robot device RBT comprises a first arm AR1, a second arm AR2, and a joint part JT. The first arm AR1is connected to the second arm AR2via the joint part JT. The first arm AR1includes an arm part101, a bearing101a, and a bearing101b. The second arm AR2includes an arm part102and a connection part102a. The connection part102ais disposed between the bearing101aand the bearing101bat the joint part JT. The connection part102ais provided integrally with the rotary shaft SF2. The rotary shaft SF2is inserted into both the bearing101aand the bearing101bat the joint part JT. 
An end portion on a side of the rotary shaft SF2, which is inserted into the bearing101b, is connected to a decelerator RG through the bearing101b. The decelerator RG is connected to the drive device MTR, and decelerates rotation of the drive device MTR to 1/100 or the like, for example, and transmits the same to the rotary shaft SF2. Although not shown and described inFIG.13, the load-side end portion SFa of the rotary shaft SF of the drive device MTR is connected to the decelerator RG. Also, the scale S of the encoder device EC is attached to the anti-load-side end portion SFb of the rotary shaft SF of the drive device MTR. In the robot device RBT, when the drive device MTR is driven to rotate the rotary shaft SF, the rotation is transmitted to the rotary shaft SF2via the decelerator RG. The connection part102ais integrally rotated by the rotation of the rotary shaft SF2, so that the second arm AR2rotates with respect to the first arm AR1. At this time, the encoder device EC detects an angular position of the rotary shaft SF, and the like. Therefore, it is possible to detect an angular position of the second arm AR2by using an output from the encoder device EC. Since the replacement of the battery of the encoder device EC is not required or is less required, the robot device RBT can reduce the maintenance cost. Note that the robot device RBT is not limited to the above configuration, and the drive device MTR can be applied to a variety of robot devices having a joint. | 103,811 |
11858133 | DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Some embodiments of the present disclosure will be described. It should be noted that the components identical with or similar to each other between the embodiments are given the same reference numerals for the sake of omitting unnecessary explanations. First Embodiment Referring toFIGS.1to13, a first embodiment will be described. FIG.1shows a robot system1including an articulated robot10(simply termed a robot10hereinafter) for industrial use and a control unit20serving as a robot controller. The robot10is also referred to as a manipulator. The robot10is a six-axis vertical articulated robot having a plurality of arms and is controlled by the control unit20. Of course, the number of multiple-joint arms of the robot10is not limited to six, and can be four, five or another number. The robot10of the present embodiment is small-sized and lightweight so that, for example, one person can carry the robot. The robot10of the present embodiment is assumed, for example, to cooperate with a person and thus is designed as a person-cooperative robot, eliminating the need for safety fences in the work environment of the robot. The robot10, which incorporates the control unit20, is designed to have a total weight of about 4 kg (kilograms) and a load capacity of about 500 g (grams). The size and weight of the robot10are not limited to those mentioned above. Also, the control unit20does not have to be incorporated in the robot10but may be provided on the outside of the robot10. In this case, the robot10and the control unit20are connected to each other by wire or wirelessly so that communication can be established therebetween. The control unit20may be connected to a computer or a mobile terminal, such as a smartphone, or may be connected to an external device90by wire or wirelessly so that communication can be established with the external device. As shown inFIG.1, the robot10includes a base11, a plurality of, i.e., six, arms121to126, and an end effector13. The base11may be fixed to or may not be fixed to a placement surface. The arms121to126and the end effector13are sequentially provided onto the base11. In the present embodiment, the arms are sequentially provided, from the base11side, a first arm121, a second arm122, a third arm123, a fourth arm124, a fifth arm125and a sixth arm126. It should be noted that, if the arms121to126are not specifically referred to in the following description, these arms are simply collectively referred to as arm(s)12. The end effector13is provided to a distal end of the sixth arm126. In this case, base11side ends of the respective arms121to126are positioned closer to the base, and end effector13side ends thereof are positioned closer to the distal end. The end effector13can be used, for example, with a robot hand, which is referred to as a chuck, gripper, suction hand or the like, being attached thereto. These attachments can be appropriately selected according to usage of the robot10. The arms121to126are rotatably connected to each other via a plurality of axes J1to J6serving as joints or linkages. These axes are sequentially provided, from the base11, a first axis J1, a second axis J2, a third axis J3, a fourth axis J4, a fifth axis J5and a sixth axis J6. It should be noted that, if the axes J1to J6are not specifically referred to in the following description, these axes are simply collectively referred to as axes (or an axis) J.
The first axis J1is a rotation axis extending in the vertical direction and connecting the first arm121to the base11so as to be horizontally rotatable relative to the base11. The second axis J2is a rotation axis extending in the horizontal direction and connecting the second arm122to the first arm121so as to be vertically rotatable relative to the first arm121. The third axis J3is a rotation axis extending in the horizontal direction and connecting the third arm123to the second arm122so as to be vertically rotatable relative to the second arm122. The fourth axis J4is a rotation axis extending in the longitudinal direction of the third arm123and connecting the fourth arm124to the third arm123so as to be rotatable relative to the third arm123. The fifth axis J5is a rotation axis extending in the horizontal direction and connecting the fifth arm125to the fourth arm124so as to be vertically rotatable relative to the fourth arm124. The sixth axis J6is a rotation axis extending in the longitudinal direction of the fifth arm125and connecting the sixth arm126to the fifth arm125so as to be rotatable relative to the fifth arm125. As shown inFIG.2, the robot10includes a plurality of, i.e., six, electric motors (hereinafter simply motors)141to146for respectively driving the axes J1to J6. In the present embodiment, the motor corresponding to the first axis J1is referred to as a first motor141, and the motor corresponding to the second axis J2is referred to as a second motor142. Also, the motor corresponding to the third axis J3is referred to as a third motor143, and the motor corresponding to the fourth axis J4is referred to as a fourth motor144. Furthermore, the motor corresponding to the fifth axis J5is referred to as a fifth motor145, and the motor corresponding to the sixth axis J6is referred to as a sixth motor146. It should be noted that, if the motors141to146are not specifically referred to in the following description, these motors are simply collectively referred to as motor(s)14. The motors141to146each have a mechanical or electrical braking function. When brakes are applied, the motors141to146constrain the axes J1to J6respectively corresponding thereto to thereby limit, i.e., inhibit, rotation of the arms121to126connected to each other via the axes J1to J6. In the present embodiment, the state in which brakes are being applied in the motors141to146is referred to as a state in which the axes J1to J6are constrained. Also, the state in which brakes are not being applied in the motors141to146, i.e., the state in which the brakes have been released, is referred to as a state in which the axes J1to J6are not constrained, i.e., a state in which constraints of the axes J1to J6have been released. As shown inFIGS.1and2, the robot10includes a plurality of, i.e., six, detection units151to156. The detection units151to156are respectively provided to the arms121to126to detect a user's manual hold action (i.e., an operator's grip by hand or an operator's manual holding touch; simply referred to as “hold” by hand) on the arms121to126. The detection units151to156may be constituted of touch sensors using, for example, resistance films, electrostatic capacitance, ultrasonic surface elasticity or electromagnetic induction, optical touch sensors, or mechanical switches made, for example, of rubber, resin or metal. Hence, when an operator selectively holds any of the arms121to126, a pressing force generated by the manual hold action and exerted on the arms121to126can be detected as an electrical signal.
Incidentally, various sensors including the detection units151to156are combined with the control unit20to form a control apparatus CA for the robot system, as shown inFIG.2. Although not shown, the various sensors include angle sensors of the motors to control the drive of the motors. That is, the detection units151to156are capable of detecting user's touch (or hold) to the surfaces of the arms121to126. In the present embodiment, the detection units151to156are incorporated into the respective arms121to126and ensured not to be visually recognizable from the user. The detection units151to156may be provided being exposed from the surfaces of the arms121to126. In the present embodiment, of the detection units151to156, the detection unit provided to the first arm121is referred to as a first detection unit151, and the detection unit provided to the second arm122is referred to as a second detection unit152. Also, the detection unit provided to the third arm123is referred to as a third detection unit153, and the detection unit provided to the fourth arm124is referred to as a fourth detection unit154. Furthermore, the detection unit provided to the fifth arm125is referred to as a fifth detection unit155, and the detection unit provided to the sixth arm126is referred to as a sixth detection unit156. It should be noted that, if the detection units151to156are not specifically referred to in the following description, these detection units are simply collectively referred to as detection unit(s)15. The control unit20is mainly constituted of a CPU21and a microcomputer that includes a storage area22, such as a ROM, a RAM or a rewritable flash memory, to control motions of the entire robot10. The storage area22, which has a function of a non-transitory computer readable recording medium, stores robot control programs for driving and controlling the robot10. The control unit20allows the CPU21to execute the robot control program to thereby control motions of the robot10. In the present embodiment, as shown inFIG.2, the detection units151to156and the motors141to146are electrically connected to the control unit20. Based on the results of detection performed by the detection units151to156, the control unit20drives and controls the motors141to146. It has been found that, when the user directly touches and manipulates the robot10, the user's manipulation has the following tendencies. Specifically, when the user manipulates one arm12among the arms121to126by holding the arm12by one hand, the user tends to have a desire to use the base11as a fulcrum and move the arm12integrally with the arms12positioned between the first arm121connected to the base11and the arm12held by the user. Also, when the user manipulates two arms12among the arms121to126by holding the two arms12by the user's respective hands, the user tends to have a desire to use the arm12held by one hand, i.e., base11side hand, as a fulcrum and to move these two arms12integrally with the arms positioned between the arm12held by one hand and the arm12held by the other hand. Therefore, in the present embodiment, if the detection units151to156have not detected any manual hold action of the user, the control unit20constrains the axes J1to J6to inhibit motions of the arms121to126. 
If the detection units151to156have detected any manual hold action of the user, the control unit20controls the corresponding motors14to release constraints of the corresponding axes J which correspond to the detection units15that have detected the user's manual hold action, as shown inFIG.3. In other words, unless the user desires to directly touch and manipulate the robot10, no manual hold action on the arms121to126will be detected, as a matter of course, in the detection units151to156. Accordingly, if no manual hold action has been detected in any of the detection units151to156, the control unit20constrains all the axes J1to J6to inhibit motions of all the arms121to126. If the user has tried to manipulate the robot10by one hand, i.e., if the user has held one arm12among the arms121to126, the corresponding detection unit15among the detection units151to156will detect the user's manual hold action. Therefore, if any manual hold action has been detected by one detection unit15among the detection units151to156, the control unit20releases constraints of all the axes J closer to the base than is the arm12which includes the detection unit15that has detected the manual hold action and, at the same time, constrains the remaining axes J. Thus, the arms12closer to the base relative to the arm12held by the user are permitted to be movable, while the arms12closer to the distal end relative to the arm12held by the user are inhibited from moving. If the user has tried to manipulate the robot10by both hands, i.e., if the user has held two arms among the arms121to126by the user's respective hands, the manual hold action will be detected by two detection units15among the detection units151to156. Therefore, if any manual hold action has been detected by two detection units15among the detection units151to156, the control unit20releases constraints of all the axes J between two arms12which respectively include the detection units15that have detected the manual hold action and, at the same time, constrains the remaining axes J. Thus, the arms12positioned between the two arms12held by the user are permitted to be movable, while the remaining arms12are inhibited from moving. FIG.3is a table showing correlation between the results of detection derived from the detection units151to156, and the axes J1to J6, according to the present embodiment. In the table ofFIG.3, the mark "-" indicates that the detection units151to156have not detected any manual hold action of the user. Also, the mark "◯" indicates that constraints of the axes J1to J6have been released, and the mark "x" indicates that the axes J1to J6are constrained. Referring now toFIGS.4to12, an example of the user directly manipulating the robot10will be described. For example, if the user's manual hold action has been detected by none of the detection units151to156, the control unit20constrains all the axes J1to J6as indicated in the row that is defined by the mark "-" in the "Arm 1" and "Arm 2" columns inFIG.3. Thus, the arms121to126of the robot10are brought into a state of being fixed. As shown inFIG.4, if the user holds the first and sixth arms121and126by the user's respective hands and tries to move the sixth arm126using the first arm121as a fulcrum, the user's manual hold action is detected by two detection units, i.e., the first and sixth detection units151and156.
In this case, as indicated in the row that is defined by the “First detection unit” in the “Arm 1” column and the “Sixth detection unit” in the “Arm 2” column inFIG.3, the control unit20constrains the first axis J1and releases constraints of the second to sixth axes J2to J6. Thus, as shown inFIGS.4and5, the user can move the sixth arm126using the first arm121as a fulcrum. For example, as shown inFIG.6, if the user holds the second and third arms122and123by the user's respective hands and tries to move the third arm123using the second arm122as a fulcrum, the user's manual hold action is detected by two detection units, i.e., the second and third detection units152and153. In this case, as indicated in the row that is defined by the “Second detection unit” in the “Arm 1” column and the “Third detection unit” in the “Arm 2” column inFIG.3, the control unit20releases the constraint of the third axis J3positioned between the second and third arms122and123and constrains all the remaining axes J1, J2and J4to J6. Thus, as shown inFIGS.6and7, the user can move only the third arm123using the second arm122as a fulcrum, with all the arms except for the third arm123being fixed. For example, as shown inFIG.8, if the user holds the third and sixth arms123and126by the user's respective hands and tries to move the fourth, fifth and sixth arms124,125and126using the third arm123as a fulcrum, the user's manual hold action is detected by two detection units, i.e., the third and sixth detection units153and156. In this case, as indicated in the row that is defined by the “Third detection unit” in the “Arm 1” column and the “Sixth detection unit” in the “Arm 2” column inFIG.3, the control unit20releases constraints of all the axes J4to J6between the third and sixth arms123and126and constrains all the remaining axes J1to J3. Thus, as shown inFIGS.8and9, the user can move the fourth, fifth and sixth arms124,125and126, with the first, second and third arms121,122and123being fixed. For example, as shown inFIG.10, if the user holds the sixth arm126by one hand and tries to move the entire robot10, the user's manual hold action is detected by only the sixth detection unit156. In this case, as indicated in the row that is defined by the “Sixth detection unit” in the “Arm 1” column and the mark “-” in the “Arm 2” column inFIG.3, the control unit20releases constraints of all the axes J1to J6positioned between to the base11and the sixth arm126. Thus, as shown inFIGS.10to12, the user can move all the arms121to126by holding the sixth arm126. In a state in which constraints of the axes J1to J6have been released, the control unit20performs gravity compensation by controlling the motors141to146, so that the arms121to126, which are connected to the constraint-released axes J1to J6, do not move due to the own weight of the arms121to126. In this case, the motors141to146generate weak torque that is sufficient to resist against the torque applied to the motors141to146due to the own weight of the arms121to126. Therefore, even when constraints of the axes J1to J6have been released, the user can move the arms121to126with light force, as desired, without feeling the weight of the arms121to126. Referring toFIG.13as well, control performed by the control unit20will be described. When control is started (start), the control unit20allows the CPU21to execute the robot control program with a processing procedure as shown inFIG.13. 
First, at step S11, the control unit20activates brakes of the respective motors141to146to constrain the axes J1to J6. Thus, the robot10is brought into a state in which the arms121to126are fixed, i.e., locked. Then, at step S12, the control unit20determines whether any hold (gripping or touching) operation has been detected by the detection units151to156. If no manual hold action has been detected (NO at step S12), the control unit20terminates the processing (end). If any manual hold action has been detected (YES at step S12), the control unit20allows the processing to proceed to step S13. At step S13, the control unit20confirms the number of detections performed by the detection units151to156. If any manual hold action has been detected by three or more detection units among the detection units151to156(three or more at step S13), it is difficult to determine the user's intention, i.e., which of the arms121to126the user desires to move. Therefore, in this case, the control unit20allows processing to proceed to step S14to determine the occurrence of error, and then terminates the processing (end). If any manual hold action has been detected by one detection unit among the detection units151to156(“one” at step S13), the control unit20allows the processing to proceed to step S15. Then, at step S15, the control unit20releases constraints, as shown in the table ofFIG.3, of the axes J closer to the base11among the axes J1to J6than is the arm12for which the manual hold action has been detected and, at the same time, retains constraints of the remaining axes J. After that, the control unit20allows the processing to proceed to step S17. If any manual hold action has been detected by two detection units among the detection units151to156(“two” at step S13), the control unit20allows the processing to proceed to step S16. Then, at step S16, the control unit20releases constraints, as shown in the table ofFIG.3, of the axes J positioned between the two arms12for which the manual hold action has been detected and, at the same time, retains constraints of the remaining axes J. After that, the control unit20allows the processing to proceed to step S17. At step S17, the control unit20determines whether detection of the manual hold action is being continued by the detection units151to156(in other words, the manual hold action has been finished or not). If detection of the manual hold action is being continued, the control20iterates the processing of step S17and allows the axes J to be retained in a constrained or constraint-released state at step S15or S16. If a manual hold action is no longer detected by the detection units151to156due to the user losing his/her hold on the arms121to126, the control unit20allows the processing to proceed to step S18. At step S18, the control unit20activates brakes of the respective motors141to146, as in the processing at step S11, to constrain the axes J1to J6. Thus, the robot10is again brought into a state in which the arms121to126are fixed, i.e., locked. While the robot control program is being executed, the control unit20iterates the processing shown inFIG.13from the start to the end. In the foregoing configurations, the control unit20is able to function as a hold determining unit and a constraint controlling unit. 
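The selection rule of the table ofFIG.3and the procedure ofFIG.13can be condensed into a short sketch. The following Python fragment is illustrative only and is not the robot control program stored in the storage area22; it assumes that arms and axes are identified simply by their index 1 to 6, that the axis Jk is the axis on the base11side of the k-th arm, and that three or more simultaneous holds are treated as an error, as at step S14.

    # Illustrative sketch only; not the disclosed robot control program.
    def axes_to_release(held_arms):
        """Return the set of axis indices (1 to 6) whose constraints are released."""
        held = sorted(held_arms)
        if len(held) == 0:
            return set()                          # no hold: J1 to J6 all stay constrained
        if len(held) == 1:
            k = held[0]
            return set(range(1, k + 1))           # release every axis on the base 11 side of the held arm
        if len(held) == 2:
            near, far = held
            return set(range(near + 1, far + 1))  # release only the axes between the two held arms
        raise ValueError("three or more holds: the user's intention cannot be determined (step S14)")

    # A few rows of the table of FIG.3, reproduced by the rule above.
    assert axes_to_release({6}) == {1, 2, 3, 4, 5, 6}      # sixth arm 126 held by one hand
    assert axes_to_release({1, 6}) == {2, 3, 4, 5, 6}      # first axis J1 remains constrained
    assert axes_to_release({2, 3}) == {3}                  # only the third axis J3 is released
    assert axes_to_release({3, 6}) == {4, 5, 6}

    def control_cycle(read_held_arms, set_released_axes, report_error):
        """One pass of the procedure of FIG.13, greatly simplified."""
        set_released_axes(set())                  # S11: brakes applied on all axes
        held = read_held_arms()
        if not held:
            return                                # S12: no hold detected
        try:
            set_released_axes(axes_to_release(held))   # S13, S15, S16
        except ValueError:
            report_error()                        # S14: error, three or more holds
            return
        while read_held_arms():                   # S17: keep the state while a hold continues
            pass
        set_released_axes(set())                  # S18: brakes applied again on all axes

Representing the released axes as a set keeps the constrained state of step S11 trivial to restore at step S18.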
In addition, the step S11functionally configures an initial constraint controlling unit (or step), the step S12functionally configures a first determination unit (or step), a determined-NO loop from the step S12functionally configures a first control unit (step), a pair of the steps S12and S13functionally configures second and third determination units (steps), the step S15functionally configures a second control unit (step), the step S16functionally configures a third control unit (step), the step S17functionally configures a fourth determination unit, and the step S18functionally configures a fourth control unit. According to the embodiment described above, the robot system1includes the articulated robot10and the control unit20. The articulated robot10includes the base11, a plurality of, i.e., six, arms121to126, a plurality of, i.e., six, axes J1to J6, a plurality of, i.e., six, motors141to146, and a plurality of, i.e., six, detection units151to156. The arms121to126are connected to each other and provided onto the base11. The axes J1to J6are provided to the respective arms121to126to connect the arms121to126to each other. The motors141to146are provided to the respective axes J1to J6to drive the axes J1to J6. The detection units151to156are provided to the respective arms121to126to detect a user's manual hold action. The control unit20drives and controls the motors141to146based on the detection conditions of the detection units151to156to thereby control motions of the arms121to126. If the detection units151to156have not detected any manual hold action of the user, the control unit20constrains the axes J1to J6. If the detection units151to156have detected any manual hold action of the user, the control unit20controls the corresponding motors14and releases constraints of the corresponding axes J which correspond to the detection units15that have detected the user's manual hold action. With this configuration, to release constraints of the axes J1to J6, the user does not have to press and hold the axis constraint-release button provided to the teaching pendant or to the robot10, or does not have to select the axes J, whose constraints are to be released, by using the teaching pendant. Specifically, the user can hold the arms121to126desired to be moved and can release constraints of the axes J corresponding to the arms12held by the user. Thus, the user can easily move the arms12by directly holding by hand the axes J corresponding to the arms12held by the user. Furthermore, the axes J corresponding to the arms12which are not held by the user are brought into a state of being constrained. Therefore, these arm12with constrained axes J are prevented from unexpectedly moving. Consequently, the user can directly touch and easily move the arms121to126of the robot10and can save time. As described above, if the user holds one of the arms121to126by one hand, the user tends to have a desire to use the base11as a fulcrum and move the arm12held by the user together with the arms12positioned between the base11and the arm12held by the user. In this regard, if any manual hold action has been detected by one detection unit15among the detection units151to156, the control unit20of the present embodiment releases the axes J positioned closer to the base than is the arm12which includes the detection unit15that has detected the manual hold action and, at the same time, constrains the remaining axes J. 
With this configuration, by holding one of the arms121to126by one hand, the user can integrally move the arm12held by the user together with the arms12positioned between the base11and the arm12held by the user, using the base11as a fulcrum. Accordingly, when moving any one of the arms121to126by one hand, the user can move the desired arm12easily and even more intuitively. As described above, if the user holds two arms12among the arms121to126by the user's respective hands and manipulate them, the user tends to have a desire to integrally move the arm12held by one hand and the arm12held by the other hand together with the arms12positioned between these two arms12, using the arm12held by one hand as a fulcrum. In this regard, if any manual hold action has been detected by two detection units15among the detection units151to156, the control unit20of the present embodiment releases constraints of the axes J between the two arms12which respectively include the detection units15that have detected the manual hold action and, at the same time, constrains the remaining axes J. With this configuration, by holding two arms12among the arms121to126by the user's respective hands, the user can integrally move the arm12held by one hand and the arm12held by the other hand together with the arms12positioned between these two arms12, using the arm12held by one hand, i.e., the base11side hand, as a fulcrum. Specifically, with this configuration, the user can move, as desired, the two arms12held by the user's respective hands together with the arms12positioned between the two arms12. Furthermore, the remaining arms12, i.e., the arms12positioned on the outside of the two arms12held by the user's respective hands, are brought into a state of being fixed. Therefore, when the user moves the two arms12held by the user's respective hands, the outside arms12which are not desired to be moved are prevented from being pulled and moved by the motions of the two arms12held by the user. Thus, the user can easily move the two arms12held by the user's respective hands and bring the robot10into a desired posture. In this way, according to the present embodiment, when moving any two of the arms121to126by the user's respective hands, the user can move the desired two arms12among the arms121to126easily and even more intuitively. When the axes J1to J6have been released, i.e., when the arms121to126are in a state in which they can be held and moved by the user, the control unit20performs gravity compensation by controlling the motors141to146such that the arms121to126do not move due to the own weight of the arms121to126. Thus, the user can move the arms121to126with light force, as desired, without feeling the weight of the arms121to126. With this configuration, when the user moves the arms121to126and tries to stop the motions of the arms at respective target positions, the arms121to126are not allowed to move beyond the target positions, which would otherwise occur due to the inertia of the arms121to126. Thus, the motions of the arms121to126can be immediately stopped with accuracy at the target positions. Consequently, when performing direct teaching, for example, this improved accuracy can be exerted. Second Embodiment Referring toFIG.14, a second embodiment of the present disclosure will be described. The present embodiment relates to control during direct teaching in which the user touches and moves the robot10. 
In the present embodiment, as shown inFIG.14, the control unit20performs steps S21and S22in addition to the processing shown inFIG.13. Prior to starting the processing shown inFIG.14, the user may switch mode to a direct teaching mode for performing direct teaching, by operating an operation means, not shown, provided to the robot10or to an external device90. As shown inFIG.14, if the detection units151to156detects any manual hold action (YES at step S12), the control unit20allows the processing to proceed to step S21where the control unit20starts recording the teaching, e.g., motion (movement) paths, moving velocities or positions of the arms121to126. If the user loses his/her hold on the arms121to126, the detection units151to156no longer detect a manual hold action (NO at step S17). Thus, the control unit20determines that the direct teaching has finished. Then, at step S22, the control unit20finishes recording of the teaching. With this configuration, the user can easily perform direct teaching by touching or holding the arms121to126. According to the present embodiment, the user can hold and move the arms121to126, so that the motions of the arms121to126are recorded as teaching. During direct teaching, this way of recording motions can contribute to reducing the number of times of operating the operation means, not shown, provided to the robot10or to the external device90. Consequently, the user can even more easily perform direct teaching. Third Embodiment Referring toFIGS.15and16, a third embodiment of the present disclosure will be described. In the third embodiment, the robot10additionally includes a base detection unit16. The base detection unit16has a configuration similar to those of the detection units151to156and is provided to the base11. The base detection unit16detects a user's manual hold action on the base11. If any manual hold action has been detected by the detection units151to156and16, the control unit20controls the corresponding motors14, as shown inFIG.16, corresponding to the detection units15and/or16that have detected the manual hold action and releases the corresponding axes J. In this case, unless any manual hold action is detected by two of the detection units151to156and16, constraints of all the axes J1to J6are retained. Specifically, if any manual hold action has been detected by only one of the detection units151to156and16, none of the axes J1to J6is released from constraints. If any manual hold action has been detected by the base detection unit16and by one of the detection units151to156that are respectively provided to the arms121to126, the control unit20releases constraints of the axes J between the arm12, which includes the detection unit15that has detected the manual hold action, and the base11. Specifically, the control unit20performs control as in the case where a manual hold action has been detected by one detection unit15in the first embodiment. With this configuration, the user cannot move the arms121to126unless the user holds the arms121to126by both hands. Specifically, the robot10is ensured not to be moved if the user holds the arms121to126only by one hand. In other words, to move the arms121to126of the robot10of the present embodiment, the user requires to have a clear intention of holding the arms121to126by both hands. Accordingly, for example, if the user has unintentionally touched or held the arms121to126, the arms121to126are prevented from erroneously moving. 
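The modified rule of the present embodiment, under which no axis is released unless at least two of the detection units151to156and16report a hold, can be sketched in the same illustrative style. Using the index 0 for the base detection unit16is an assumption of the sketch, not a reference numeral of the disclosure.

    # Illustrative sketch only of the rule of the third embodiment.
    def axes_to_release_third_embodiment(held_points):
        """held_points: indices of detection units reporting a hold, with 0 standing
        for the base detection unit 16 and 1 to 6 for the detection units 151 to 156."""
        held = sorted(held_points)
        if len(held) < 2:
            return set()                          # a single hold releases nothing in this embodiment
        if len(held) == 2:
            near, far = held                      # near may be 0, i.e., the base 11
            return set(range(near + 1, far + 1))  # axes between the two held members
        raise ValueError("three or more holds: the user's intention cannot be determined")

    assert axes_to_release_third_embodiment({3}) == set()          # one-handed hold: nothing is released
    assert axes_to_release_third_embodiment({0, 3}) == {1, 2, 3}   # base 11 and third arm 123 held
    assert axes_to_release_third_embodiment({2, 5}) == {3, 4, 5}   # two arms held, as in the first embodiment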
In the embodiments described above, each of the arms121to126and the base11may be provided with a plurality of the respective detection units151to156and16. Alternatively, the arms121to126and the base11may each have a surface the entirety of which can serve as a region capable of detecting a manual hold action. The robot10is not limited to a six-axis vertical articulated robot but may be a horizontal articulated robot. The number of axes of the robot may be arbitrarily changed. The present disclosure is not limited to the embodiments described above and illustrated in the drawings but may be modified as appropriate within a range not departing from the spirit of the disclosure. | 31,437
11858134 | Like reference numbers and designations in the various drawings indicate like elements. DETAILED DESCRIPTION The specification describes a modular automated robotic manufacturing unit with individual robotic cells working conjointly and yet have the capacity to disconnect in the event of failure or bottlenecks. The modular automated robotic manufacturing unit is designed to have flexibility and multi-part handling capability in every cell. In operation, a system includes a plurality of robotic cells that are each configurable to perform a plurality of manufacturing processing steps in a manufacturing process. Each robotic cell includes a set of end effectors and is operable to select from the set of end effectors a selected set of end effectors to perform an assigned manufacturing process step. The assigned manufacturing processing step is one of the manufacturing processing steps in the manufacturing process. In some implementations, the manufacturing process includes a sequence of manufacturing processing steps, i.e., a first manufacturing processing step precedes a second manufacturing processing step in the sequence, and the second manufacturing processing step further precedes a third manufacturing processing step in the sequence, and so on. The set of end effectors for each robotic cell includes end effectors to form a selected set for at least one of the manufacturing processing steps. In some implementations, a robotic cell includes end effectors for all manufacturing processing steps. Alternatively, a robotic cell only includes end effectors for one or some of the manufacturing processing steps. The system also includes a manufacturing execution system that is in data communication with each of the robotic cells. The system is configured to receive data from one or more sensors. The system determines progress of the manufacturing process from the sensor data. Based on the received data, the system determines that a first robotic cell performing a first manufacturing processing step is to be reconfigured to perform a second manufacturing processing step that is different from the first manufacturing processing step. The system can make this determination in any of a variety of ways. For example, the system can determine, from the sensor data that the first robotic cell is in idle condition. As another example, the system can determine, from the sensor data, that there exists a shortage in delivery of materials by a conveyance system to be processed by the first robotic cell in the first manufacturing processing step. In response to a positive determination, the system instructs the first robotic cell to reconfigure from a selected first set of end effectors for performing the first manufacturing processing step to a selected second set of end effectors for performing the second manufacturing processing step. The first robotic cell then performs the reconfiguration in association with an end effector changing system. In addition, the system instructs the conveyance system to reconfigure from delivering materials to be processed by the first manufacturing processing step to the first robotic cell to delivering materials to be processed by the second manufacturing processing step to the first robotic cell. These features and additional features will be described in more detail below. FIG.1shows an example robotic manufacturing system100. 
The robotic manufacturing system100is an example of a system in which the systems, components, and techniques described below can be implemented. In general, a robotic manufacturing system is a manufacturing system that adopts one or more robotic cells to perform respective manufacturing processes of certain products. Each robotic cell in turn includes one robot. The robot can be any of a variety of robots that are appropriate for the manufacturing processes. An industrial robot with three or more axes, for instance, is a common example of robots that are deployed in many manufacturing environments. Typically, a manufacturing process includes a sequence of manufacturing processing steps. That is, a first manufacturing processing step precedes a second manufacturing processing step in the sequence, and the second manufacturing processing step further precedes a third manufacturing processing step in the sequence, and so on. In some implementations, while being executed in a predetermined sequence, certain manufacturing processing steps are practically independent from others. The respective positions of these steps in the predetermined sequence may only be results of economic or efficiency concerns. In these implementations, without risks of halting the overall manufacturing process, the certain manufacturing processing steps can be executed before the completion of some or all of their preceding steps in the sequence. In particular, in the example depicted inFIG.1, the robotic manufacturing system100includes a plurality of robotic cells, e.g., robotic cells A-N and a plurality of corresponding end effector sets, e.g., end effector sets A-N. Although three robotic cells are depicted inFIG.1, the robotic manufacturing system100may include more or less robotic cells. Each robotic cell in turn includes a robot (not shown in the figure), and optionally, other peripherals such as power source and safety equipment. Each robot and the corresponding peripherals are operable to jointly perform certain manufacturing processing steps. It should be noted that, for convenience, the manufacturing processing steps described in this document are largely described to be performed by a robotic cell, instead of a robot. A robotic cell (e.g., robotic cell A) is operable to select from the respective set of end effectors (e.g., end effectors set A) a selected set of end effectors to perform an assigned manufacturing process step. The assigned manufacturing processing step is one of the manufacturing processing steps in the manufacturing process. In some implementations, a robotic cell selects a selected set of end effectors from a set of end effectors using an end effector changing system. The end effector changing system allows an end effector to automatically couple to or decouple from a robot within the robotic cell. In these implementations, some or all of the robotic cells may further include a respective end effector changing system (not shown in the figure). The robotic manufacturing system100includes a conveyance system140. The conveyance system140handles the flow of products, components, and materials between different robotic cells during the manufacturing process. For example, the conveyance system140can deliver materials required by a manufacturing processing step to a corresponding robotic cell that is configured to perform the particular manufacturing processing step. 
As another example, the conveyance system140can transport products that have finished being processed by a manufacturing processing step to another robotic cell that is configured to perform a subsequent manufacturing processing step in the sequence. The system100includes a manufacturing execution system110that is in data communication with each of the plurality of robotics cells120A-N and the conveyance system140. The manufacturing execution system110is configured to monitor and control the overall manufacturing process. In some implementations, monitoring and controlling are optionally performed in conjunction with other resource planning and process control systems (not shown in the figure). The system100further includes one or more sensors150that are communicatively coupled to the manufacturing execution system110. Example sensors may include laser distance sensors, photo-eyes, vision cameras, lighting, pneumatic and electrical actuators, and so on. The sensors can provide data describing a latest status of the manufacturing process. The system then determines progress of the manufacturing process from the received sensor data. Although illustrated as being logically associated with the manufacturing execution system110, some or all of the sensors150can be installed anywhere within the robotic manufacturing system100, including inside the robotic cells120A-120N and the conveyance system140. During the manufacturing process, under certain circumstances, the manufacturing execution system110can determine, based on received sensor data, whether to instruct a robotic cell to reconfigure from performing a first manufacturing step to performing a second manufacturing step. Example circumstances may include events of equipment failure, production bottlenecks, and so on. Determining whether to instruct a robotic cell performing a first manufacturing step to reconfigure to perform a second manufacturing processing step that is different from the first manufacturing step will be described in more detail below with respect toFIGS.3and4. FIG.2shows an example format of end effectors data stored in a data store200. The data store200may be physically located in the manufacturing execution system110ofFIG.1. Alternatively, the data store200may be a cloud data store that provides end effectors data to the conveyance system140over a data communication network. The data store200stores and manages data defining the one or more types of end effectors that are required for performing each of the manufacturing processing steps in a particular manufacturing process. For instance, as shown inFIG.2, each piece of data may specify that for a particular manufacturing processing step (e.g., “MS1”) in a particular manufacturing process, a list of one or more types of end effectors (e.g., “{E1}”) are required. Example end effectors in the list may include grippers, fastening tools, material removal tools, welding torches, and so on. FIG.3shows a flow diagram of an example process300for reconfiguring a robotic manufacturing system. For convenience, the process300will be described as being performed by a robotic manufacturing system. For example, the robotic manufacturing system100ofFIG.1, appropriately programmed in accordance with this specification, can perform the process300. The system receives sensor data describing a latest status of the manufacturing process (302). 
Example sensor data may include robotic cells operation status, end effectors information, raw materials stock level, conveyance system tracking status, and so on. Based on received sensor data, and as will be described in more detail below with respect toFIG.4, the system determines whether to reconfigure a robotic cell (304). If the system determines not to reconfigure any robotic cells, the process300returns to step302. That is, the system maintains current robotic cell configurations for performing the manufacturing process. And the system continues to receive sensor data describing a latest status of the manufacturing process. Alternatively, if the system determines to reconfigure at least one robotic cell, the system proceeds to instruct the robotic cell to reconfigure (306). As one particular example, upon receiving sensor data indicating that a second robotic cell performing a second manufacturing process step, e.g., MS2, is lagging in progress relative to other cells, the system determines that a first robotic cell performing a first manufacturing processing step, e.g., MS1, is to be reconfigured to perform the second manufacturing processing step. In particular, the second manufacturing step is different from the first manufacturing processing step and thus requires at least one new end effector. In response to the determining, the system instructs the first robotic cell to reconfigure from a selected first set of end effectors, e.g., {E1} for performing the first manufacturing processing step to a selected second set of end effectors, e.g., {E2} for performing the second manufacturing processing step. In this particular example, in addition to reconfiguring the first robotic cell, the system further determines that a third robotic cell performing a third manufacturing processing step, MS3, is to be reconfigured to perform the second manufacturing processing step, MS2. The third manufacturing processing step is different from both the first and the second manufacturing processing steps. In response to the determining, the system instructs the third robotic cell to reconfigure from a selected third set of end effectors {E3} for performing the third manufacturing processing step to a selected second set of end effectors {E2} for performing the second manufacturing processing step. In some implementations, following a positive determination and prior to sending instructions, the system further determines the feasibility of reconfiguring the robotic cell. For example, the system determines whether the set of end effectors for the first robotic cell includes end effectors to form a selected set for the second manufacturing processing step. The system can make this determination by comparing sensor data describing the end effectors sets in each of the robotic cells with the end effectors data stored in the data store ofFIG.2. As another example, the system determines whether the second manufacturing processing step is within a work envelope (i.e., a range of movement) of the first robotic cell. A particular manufacturing processing step may be restricted to certain space within the system, i.e., a certain area on the conveyance system. And the work envelope confines the space in which a robotic cell can operate. In other words, a robotic cell can only perform certain manufacturing processing steps that are located within an operable space of the robotic cell. 
As another example, the system determines whether instructing the first robotic cell to perform the second manufacturing processing step will interfere with other robotic cells in the plurality of robotics cells. That is, the system determines whether reconfiguring the first robotic cell will risk interrupting the operations of other robotic cells. Example interruptions may include collisions between robots from different robotic cells, depletion of raw material supplies to multiple robotic cells by one single cell, and so on. Upon determining a confirmed feasibility of reconfiguring the robotic cell performing a first manufacturing processing step to perform a second manufacturing processing step, the system proceeds to sending the corresponding instructions. After instructing the robotic cell to reconfigure, the system further proceeds to instruct a conveyance system to reconfigure accordingly (308). Referring back to the particular example as described above, after instructing the first robotic cell to reconfigure from performing the first manufacturing processing step to performing the second manufacturing processing step, the system further instructs a conveyance system to reconfigure from delivering materials (e.g., products, components, raw materials, etc.) to be processed by the first manufacturing processing step to the first robotic cell to delivering materials to be processed by the second manufacturing processing step to the first robotic cell. The process300then returns to step302, i.e., the system continues to receive sensor data describing a latest status of the manufacturing process using the one or more reconfigured robotic cells and conveyance systems. FIG.4shows a flow diagram of an example process400for determining whether to reconfigure a robotic manufacturing system. For convenience, the process400will be described as being performed by a robotic manufacturing system. For example, the robotic manufacturing system100ofFIG.1, appropriately programmed in accordance with this specification, can perform the process400. Upon receiving sensor data describing a latest status of the manufacturing process, the system updates the data used for determining whether to perform reconfiguration processes (402). Overall, reconfiguring a robotic cell aims at increasing utilization of manufacturing equipment, which potentially leads to improved overall manufacturing throughputs. The system determines whether a robotic cell is in idle state for more than a threshold time (404). Specifically, the system determines, based on sensor data describing operation status of each robotic cells, if any one of the cells have not been in operation for a predetermined period of time. If the system makes a positive determining, the system then performs idle cell reconfiguration process (406). That is, the system determines to instruct the robotic cell in idle state to reconfigure from performing the currently assigned manufacturing processing step to performing a different manufacturing processing step. The different manufacturing processing step is one of the manufacturing processing steps in the manufacturing process that may or may not be being performed by other robotic cells. 
In some implementations, the system further determines to instruct the conveyance system to reconfigure from delivering materials to be processed by the currently assigned manufacturing processing step to the robotic cell to delivering materials to be processed by the different manufacturing processing step to the robotic cell. The system determines whether a robotic cell is lagging in progress relative to other cells (408). In particular, the system can make this determination in any of a variety of ways. For example, during a fixed period of time, the system can compare the amount of work completed by a robotic cell with an average amount of work completed by all robotic cells and determine that the robotic cell is lagging in process if the amount of work completed by the robotic cell is below the average. As another example, for a fixed amount of completed work, the system can compare the length of time taken by a robotic cell with an average length of time taken by all robotic cells and determine that the robotic cell is lagging in progress if the length of time is above the average. Specifically, in these two examples, an amount of work completed by a robotic cell can be defined using any appropriate metrics, including a percentage of completed overall manufacturing process, an amount of used raw materials, and the like. As another example, the system can maintain data defining respective desired progresses for each of the robotic cells and determine that a robotic cell is lagging in progress if the actual progress of the robotic cell fails to meet the desired progress. If the system makes a positive determining, the system performs lagging cell reconfiguration process (410). That is, the system determines to instruct one or more other robotics cells to reconfigure from performing one or more other manufacturing processing steps to performing the step that is currently being performed by the lagging cell. In some implementations, the system further determines to instruct the conveyance system to make corresponding reconfigurations. The system determines whether raw materials are exhausted for a particular robotic cell (412). For example, the system can make this determination based on sensor data describing raw materials stock level for each of the manufacturing processing steps that are being performed by respective robotic cells. If the system makes a positive determining, i.e., the raw materials stock for a robotic cell is below a predetermined level, the system performs material exhaustion reconfiguration process (414). That is, the system determines to instruct the robotic cell to reconfigure from performing the currently assigned manufacturing processing step to performing a different manufacturing processing step that does not use the exhausted materials. In some implementations, the system further determines to instruct the conveyance system to make corresponding reconfigurations. While three determining steps (404,408, and412) are depicted in the flow chart, in actual manufacturing processes, there may be more or less factors that may affect the reconfiguration decision. Accordingly, there may be more or less steps in the iteration for determining whether to reconfigure a robotic manufacturing system. At the end of the current iteration, if the determining results in previous steps are all negative, the system maintains current configuration (416), i.e., the system determines not to reconfigure any robotic cells for this iteration. 
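The determinations of the process400(steps402to416) can likewise be condensed into a short illustrative sketch. The data layout, the threshold values, and the names below are assumptions introduced for readability; the sketch only identifies which branch each robotic cell triggers, whereas, as described above, the actual reconfiguration may be applied to a different cell (for example, another cell is reassigned to assist a lagging cell).

    # Illustrative sketch of the decision flow of FIG.4; thresholds and the data
    # layout are assumptions, not values taken from the specification.
    IDLE_THRESHOLD_S = 60.0   # assumed idle-time threshold for step 404
    MIN_STOCK_LEVEL = 10      # assumed raw-material threshold for step 412

    def decide_reconfigurations(cell_status):
        """cell_status maps a cell name to its latest sensor readings.
        Returns a list of (cell, reason) pairs; an empty list corresponds to
        step 416 (maintain the current configuration)."""
        decisions = []
        for cell, s in cell_status.items():
            if s["idle_seconds"] > IDLE_THRESHOLD_S:                 # step 404
                decisions.append((cell, "idle cell reconfiguration (406)"))
            elif s["completed"] < s["fleet_average_completed"]:      # step 408
                decisions.append((cell, "lagging cell reconfiguration (410)"))
            elif s["stock_level"] < MIN_STOCK_LEVEL:                 # step 412
                decisions.append((cell, "material exhaustion reconfiguration (414)"))
        return decisions

    # Example: cell B has fallen behind the fleet average, so only it is flagged.
    status = {
        "A": {"idle_seconds": 0, "completed": 12, "fleet_average_completed": 10, "stock_level": 50},
        "B": {"idle_seconds": 0, "completed": 6,  "fleet_average_completed": 10, "stock_level": 50},
    }
    print(decide_reconfigurations(status))   # [('B', 'lagging cell reconfiguration (410)')]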
Embodiments of the subject matter and the operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on computer storage medium for execution by, or to control the operation of, data processing apparatus. A computer storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them. Moreover, while a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded in an artificially-generated propagated signal. The operations described in this specification can be implemented as operations performed by a data processing apparatus on data stored on one or more computer-readable storage devices or received from other sources. The term “data processing apparatus” encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip, or multiple ones, or combinations, of the foregoing. The apparatus can include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). The apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them. The apparatus and execution environment can realize various different computing model infrastructures, such as web services, distributed computing and grid computing infrastructures. A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network. The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output. 
The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., a FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device (e.g., a universal serial bus (USB) flash drive), to name just a few. Devices suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any features or of what may be claimed, but rather as descriptions of features specific to particular embodiments. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. 
In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous. | 27,810 |
11858135 | DESCRIPTION OF EMBODIMENT Hereinafter, preferred embodiments of a robot, a method of assembling a robot, and a robot system of the present disclosure will be described in detail with reference to the accompanying drawings. 1. Robot System First, a robot system according to an embodiment will be described. FIG.1is a side view showing a robot system1according to the embodiment.FIG.2is a partial sectional view of a base21shown inFIG.1. In the drawings of the present disclosure, for convenience of explanation, an x axis, a y axis, and a z axis are set as three axes orthogonal to each other, and each of them is indicated by an arrow. In the following description, a direction parallel to the x axis is referred to as an “x axis direction”, a direction parallel to the y axis is referred to as a “y axis direction”, and a direction parallel to the z axis is referred to as a “z axis direction”. In addition, in the following description, a tip side of each illustrated arrow is referred to as “+(plus)”, and a base end side thereof is referred to as “−(minus)”. Further, in the following description, for convenience of description, the +z axis direction is referred to as “upper” and the −z axis direction is referred to as “lower”. In addition, in the present disclosure, “connection” refers to both a state in which two members are in direct contact with each other and a state in which the two members are in indirect contact with each other via an arbitrary member. Further, in the present disclosure, the term “parallel” means a state in which lines, planes, or lines and planes are in a state parallel to each other or in a state inclined in a range of ±5 degrees or less from the parallel state. A robot system1shown inFIG.1includes a robot2and a controller3for controlling the operation of the robot2. Applications of the robot system1are not particularly limited, and examples thereof include operations such as holding, transporting, assembling, and inspecting a workpiece. 2. Robot In the present embodiment, the robot2is a horizontal articulated robot (SCARA robot). The robot2includes a base21(first member) and a robot arm20. In the present embodiment, the robot arm20includes a first arm22(second member), a second arm23, a shaft24, a payload244, and an end effector29, which will be described later. 2.1. Outline of Base The base21is fixed to an installation surface (not shown) with bolts or the like. Examples of the installation surface include a floor surface, a wall surface, a ceiling surface, and an upper surface of a table, a frame, or the like. As shown inFIG.2, the base21includes a housing51, a drive section261, a joint section53, and a belt55. The housing51shown inFIG.2has an approximately rectangular parallelepiped shape having an internal space510. The outer shape of the base21is not limited to the shape shown inFIG.2, and may be any shape. As shown inFIG.2, the drive section261, the joint section53, the belt55, and the like are accommodated in the internal space510of the housing51. Examples of the constituent material of the housing51include a metal material and a resin material, and a metal material is desirably used. Accordingly, the rigidity of the housing51can be increased, and unintended vibration of the base21can be suppressed. The drive section261generates a driving force to rotate the first arm22about the first axis AX1with respect to the base21. Further, the drive section261has an encoder (not shown) for detecting a rotation amount thereof. 
A rotation angle of the first arm22with respect to the base21can be detected by an output from the encoder. The joint section53transmits a driving force to the first arm22. Specifically, the driving force from the drive section261is converted into an operation of rotating the first arm22. The belt55is an endless belt that transmits the driving force from the drive section261to the joint section53. 2.2. Outline of Robot Arm The robot arm20is connected to the base21, and the posture of the robot arm20is controlled by the controller3. Accordingly, the end effector29is held at a target position and posture, and various operations are realized. In the robot arm20shown inFIG.1, the first arm22, the second arm23, the shaft24, the payload244, and the end effector29are connected to each other in this order. In the following description, for convenience of description, the end effector29side of the robot2is referred to as a “tip end”, and the base21side thereof is referred to as a “base end”. The first arm22is rotatable with respect to the base21about a first axis AX1parallel to the z axis. The second arm23is provided at a tip end portion of the first arm22and is rotatable about a second axis AX2parallel to the first axis AX1. The shaft24is provided at a tip end portion of the second arm23, and is rotatable about a third axis AX3parallel to the second axis AX2and translatable along the third axis AX3. The second arm23includes a base231, an upper cover232, a lower cover233, drive sections262,263,264, a joint section240, and an inertial sensor4. The base231is a core of the second arm23, and supports the drive sections262,263,264and the like. The upper cover232is provided above the base231and covers the drive sections262,263,264and the like. The lower cover233is provided below the base231and covers the inertial sensor4and the like placed on the lower surface of the base231. Examples of the inertial sensor4include an angular velocity sensor and an acceleration sensor. Note that the inertial sensor4may be omitted. The drive section262is located at a base end portion of the base231and generates a driving force to rotate the second arm23about the second axis AX2with respect to the first arm22. The drive section262includes a motor, a reduction gear, an encoder, and the like, which are not illustrated. The rotation angle of the second arm23with respect to the first arm22can be detected by the output from the encoder. The drive section263is located between the base end portion and the tip end portion of the base231, and generates a driving force to rotate a ball screw nut241and translates the shaft24in a direction along the third axis AX3. The drive section263includes a motor, a reduction gear, an encoder, and the like, which are not illustrated. An amount of translational movement of the shaft24with respect to the second arm23can be detected by the output from the encoder. The drive section264is located between the tip end portion and the base end portion of the base231, and generates a driving force to rotate the shaft24about the third axis AX3by rotating a spline nut242. The drive section264includes a motor, a reduction gear, an encoder, and the like, which are not illustrated. The rotation amount of the shaft24with respect to the second arm23can be detected by an output from the encoder. The joint section240transmits a driving force to the shaft24. Specifically, the driving forces from the drive sections263and264are converted into an operation of translation and rotation of the shaft24. 
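The arrangement described above, two rotations about axes parallel to the z axis followed by a translation along, and a rotation about, the third axis AX3, is that of a typical horizontal articulated robot. As a purely illustrative aid, the planar position of the payload244can be written with the usual forward kinematics; the link lengths and the joint values below are placeholders and are not dimensions of the disclosed robot2.

    # Illustrative only: planar forward kinematics of a horizontal articulated
    # (SCARA) arrangement. L1 and L2 are placeholder link lengths, not values
    # taken from the disclosure.
    from math import cos, sin, radians

    L1 = 0.30   # assumed effective length of the first arm 22 (m)
    L2 = 0.25   # assumed effective length of the second arm 23 (m)

    def payload_pose(theta1_deg, theta2_deg, z_shaft, theta_shaft_deg):
        """Pose of the payload 244 from the rotation about AX1, the rotation about AX2,
        the translation of the shaft 24 along AX3, and the rotation of the shaft about AX3."""
        t1 = radians(theta1_deg)
        t12 = t1 + radians(theta2_deg)
        x = L1 * cos(t1) + L2 * cos(t12)
        y = L1 * sin(t1) + L2 * sin(t12)
        return x, y, z_shaft, (theta1_deg + theta2_deg + theta_shaft_deg) % 360.0

    print(payload_pose(30.0, 45.0, -0.05, 10.0))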
The shaft24is a cylindrical shaped shaft. The shaft24is, with respect to the second arm23, translatable along a third axis AX3along the vertical direction and rotatable around the third axis AX3. Further, the ball screw nut241and the spline nut242are installed in the middle of the shaft24in the longitudinal direction, and the shaft24is supported by these. A payload244for mounting the end effector29is provided at a tip end portion of the shaft24. The end effector29attached to the payload244is not particularly limited, and examples thereof include a hand that holds an object, a tool that processes an object, and an inspection device that inspects an object. A configuration in which the end effector29is omitted may be used as the robot arm20. 2.3. Details of Base FIG.3is a perspective view showing only the housing51and the drive section261included in the base21ofFIG.2.FIG.4is a sectional view of the housing51and a top view of the drive section261which are shown inFIG.3.FIG.3andFIG.4are views showing the base21when the assembly is completed. Arrows shown inFIG.3andFIG.4indicate the position of the drive section261at the time of completion of assembly. The housing51shown inFIG.3includes a drive section containing portion51ain which the drive section261is contained and a joint section containing portion51bin which the joint section53is contained. The internal space510is constituted by inside of the drive section containing portion51aand inside of the joint section containing portion51b. As will be described later, the internal space510is defined by a first wall511, a second wall512, a top plate513, a bottom plate514, and the joint section containing portion51b, which constitute the housing51. The drive section containing portion51ahas a substantially rectangular parallelepiped shape and has a long axis extending parallel to the z axis. The drive section containing portion51ahas the first wall511and the second wall512, which extend along the z-y plane. The first wall511and the second wall512are disposed to face each other separated by a distance from each other (with the internal space510in between). The drive section containing portion51ahas the top plate513and the bottom plate514, which extend along the x-y plane. The top plate513is connected to an upper end of the first wall511and an upper end of the second wall512. The bottom plate514is connected to a lower end of the first wall511and a lower end of the second wall512. The second wall512has a side window515connecting the internal space510and the external space. The top plate513has an upper window516(opening) connecting the internal space510and the external space. The side window515may be provided in the first wall511instead of the second wall512, or may be provided in both the first wall511and the second wall512. Further, each of the side window515and the upper window516is closed by a cover (not shown). The edge of the side window515and the edge of the upper window516have a step or a taper so that the cover can be fixed by screws or the like. Note that the lid may not be closed. The drive section containing portion51ais a portion of the housing51on the positive side of the y axis, and includes a full wall window517connecting the internal space510and the external space. The full wall window517extends over the entire long axis of the drive section containing portion51a. The full wall window517can be used as an access route when a member is carried into the internal space510. 
It should be noted that the full wall window517may extend not entirely but partially along the longitudinal axis of the drive section containing portion51a. The full wall window517may be closed by a lid (not shown). As shown inFIG.3andFIG.4, the drive section containing portion51aincludes a first protrusion518and a second protrusion519. The first protrusion518protrudes from the first wall511toward the internal space510. The second protrusion519protrudes from the second wall512toward the internal space510. The first protrusion518and the second protrusion519as a pair support the drive section261accommodated in the internal space510. As shown inFIG.3, the drive section261includes a motor body261a, a drive pulley261b, and a flange261c. The motor body261agenerates a driving force to rotate about a drive axis AX4. In a state in which the drive section261is installed in the housing51, the drive axis AX4extends substantially parallel to the first axis AX1. The drive pulley261bis connected to the motor body261a. The flange261chas a plate shape with the z axis direction as a thicknesswise direction, and protrudes from the motor body261ain the x axis direction. By placing the flange261con the first protrusion518and the second protrusion519, the drive section261can be positioned in the z axis direction with respect to the housing51. The direction in which the flange261cprotrudes is not limited to the x axis direction as long as the direction intersects with the drive axis AX4. Fixing holes261dpassing through the flange261care provided on the flange261c. Screws (not shown) can be inserted into the fixing holes261d, and the flange261ccan be fastened to the first protrusion518and the second protrusion519using the screws. As shown inFIG.4, each of the first protrusion518and the second protrusion519has a support section521and a missing section522. The support sections521support the flange261cby the flange261cbeing placed thereon. The support section521has a contact surface523that contacts the flange261c. The contact surface523is recessed in comparison with portions of the first protrusion518and the second protrusion519other than the contact surface523. By bringing the flange261cinto contact with the contact surface523, that is, by engaging the contact surface523with the flange261c, the flange261cis supported by the support section521. It is to be noted that the flange261chas a long rectangular shape in the projecting direction thereof, in the x axis direction in the present embodiment, and both ends thereof are in contact with the contact surfaces523. Further, the support section521has fastening holes525extending along the z axis. The fastening holes525are used to fasten the flange261cusing screws after the flange261cis brought into contact with the contact surface523. As will be described in detail later, the missing sections522have a shape that allows both end portions of the flange261cto pass through when the posture of the drive section261is changed in association with the assembly of the robot2. Specifically, the missing sections522include grooves524that penetrate each of the first protrusion518and the second protrusion519in the z axis direction. By providing such grooves524, when the base21is assembled as will be described later, both ends of the flange261ccan move by a path passing through the missing sections522. Accordingly, the belt55can be wound around the drive pulley261bwhile suppressing the load applied to the belt55shown inFIG.2. 
Note that the grooves524shown inFIG.4may be expanded to the −y axis side from the positions shown inFIG.4. However, if the grooves524are expanded, the mechanical strength of the first protrusion518and the second protrusion519may decrease, and the mechanical strength of the housing51may also decrease. Therefore, the widths of the grooves524, that is, the lengths of the grooves524in the y axis direction are a width that allows both end portions of the flange261cto pass, and are desirably not wider than necessary. The joint section containing portion51bis a portion of the housing51on the −y axis side. The joint section containing portion51bhas a substantially cylindrical shape with an upper end and a lower end open, respectively, and has a long axis extending parallel to the z axis. As shown inFIG.2, the first arm22is connected to the upper end of the joint section containing portion51b. The joint section53shown inFIG.2is accommodated in the joint section containing portion51b. As shown inFIG.2, the joint section53has a driven pulley532and a reduction gear534. The driven pulley532is connected to the reduction gear534. The reduction gear534is connected to the first arm22shown inFIG.2. Examples of the reduction gear534include a planocentric scheme. The belt55transmits the driving force from the drive section261accommodated in the drive section containing portion51ato the joint section53accommodated in the joint section containing portion51b. Therefore, as shown inFIG.2, the belt55is wound over the drive pulley261band the driven pulley532. In a state in which the drive section261and the joint section53are installed in the housing51, the belt55is wound in an annular shape extending in the x-y plane. The constituent material of the belt55is not particularly limited, and examples thereof include a composite material of a stiffener and an elastic material. The belt55composed of such a composite material has a mechanical strength capable of transmitting a high torque driving force. Examples of the stiffener include glass fiber, polyester fiber, nylon fiber, aramid fiber, carbon fiber, cotton thread, and the like, and one kind or a mixed fibers of two or more kinds thereof is used. Among them, glass fiber or carbon fiber is desirably used as the stiffener. Examples of the elastic material include at least one selected from the group consisting of nitrile rubber, carboxylated nitrile rubber, hydrogenated nitrile rubber, chloroprene rubber, chlorosulfonated polyethylene, polybutadiene rubber, natural rubber, EPM, EPDM, urethane rubber, and acrylic rubber. Among them, materials classified into ultra-high hardness synthetic rubber are desirably used as the elastic material. 3. Controller The operation of the robot2is controlled by the controller3. The controller3may be disposed outside the base21as shown inFIG.1, or may be contained in the base21. The controller3controls driving of the drive sections261,262,263and264according to an operation program stored in advance. Accordingly, the posture of the robot arm20is controlled. 4. Method of Assembling Robot Next, an assembling method of the robot according to the embodiment will be described. FIG.5is a process diagram for explaining the method of assembling a robot according to the embodiment.FIGS.6,8,9,11, and12are sectional views for explaining the robot assembling method shown inFIG.5.FIG.7is a top view for explaining the method of assembling the robot shown inFIG.5.FIG.10is a side view for explaining the method of assembling the robot shown inFIG.5. 
InFIGS.9and11, a part of the belt55is omitted. The method of assembling the robot shown inFIG.5includes a preparation step S102, a first belt winding step S104, a drive section posture changing step S106, a second belt winding step S108, and a flange fixing step S110. Hereinafter, each step will be described. 4.1. Preparation Step In the preparation step S102, the base21(first member) before assembling shown inFIG.6is prepared. The base21before assembling shown inFIG.6includes the housing51, the drive section261, the joint section53, and the belt55, but the belt55is not yet wound to the drive section261. As shown inFIG.3, the housing51includes the first wall511, the second wall512, the first protrusion518, and the second protrusion519. As shown inFIG.2, the drive section261includes the motor body261a, the drive pulley261b, and the flange261c. The joint section53includes the driven pulley532and the reduction gear534. Each of the first protrusion518and the second protrusion519includes a support section521and a missing section522. The housing51is manufactured by, for example, a casting method, a die casting method, or the like. A part of the housing51may be formed by a machining method. Examples of the machining method include cutting and grinding. Examples of the portion formed by the machining method include the contact surfaces523, the grooves524, and the fastening holes525. By forming these portions by a machining method, machining accuracy can be easily increased. For example, the coplanarity of the contact surface523of the first protrusion518and the contact surface523of the second protrusion519, that is, the degree to which both are included in the same plane, can be increased. In addition, the positions of the grooves524and the fastening holes525in the x-y plane and the parallelism between the fastening holes525and the z-axis can be sufficiently close to the design values. As a result, it is possible to increase the accuracy of the position and the posture of the drive section261with respect to the housing51. Further, if the contact surfaces523, the grooves524, the fastening holes525, and the like can be machined by a machining method, it is not necessary to manufacture these components by a casting method or a die casting method, and thus it is possible to reduce difficulty in manufacturing the housing51. The top plate513shown inFIG.3has the upper window516as described before. When viewed from a position along the drive axis AX4, that is, a position above the upper window516, the upper window516overlaps with the missing sections522. Therefore, a machining tool can be inserted from the upper window516to machine the contact surfaces523, the grooves524, the fastening holes525and the like. As a result, manufacture of the housing51becomes easy, and the robot2having an excellent manufacturing easiness can be realized. Then, the joint section53is previously installed in the joint section containing portion51bof the housing51. In the joint section53, as shown inFIG.6, the driven pulley532is positioned below the reduction gear534, and the driven pulley532is installed in a rotatable state. The reduction gear534is fixed to the housing51. 4.2. First Belt Winding Step In the first belt winding step S104, as shown inFIG.6, one end of the belt55is wound around the driven pulley532. As described above, the belt55has a mechanical strength capable of transmitting a high torque driving force. 
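The mechanical strength requirement just noted can be related to the drive torque: the effective tension that the belt55 must carry is approximately the transmitted torque divided by the pitch radius of the drive pulley261b. The short Python sketch below illustrates this relation; the torque and pulley diameter are hypothetical values chosen only for illustration, since the embodiment does not specify them.

def effective_belt_tension(torque_nm, drive_pulley_diameter_m):
    # Effective tension = torque / pitch radius = 2 * torque / pitch diameter
    return 2.0 * torque_nm / drive_pulley_diameter_m

# Example: 4 N*m at the drive pulley261b with a hypothetical 40 mm pitch diameter
print(effective_belt_tension(4.0, 0.040))  # -> 200.0 N carried by the belt55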
Therefore, the belt55itself has high rigidity, so when one end of the belt55is wound around the driven pulley532, the other end of the belt55is in a state of extending toward the drive section containing portion51a. Also, even if this is not the case, the other end of the belt55does not hang downward. Therefore, in the second belt winding step S108described later, the drive pulley261bcan be inserted inside the other end of the belt55. Accordingly, it is possible to relatively easily perform an operation of winding the belt55around drive pulley261b. 4.3. Drive Section Posture Changing Step In the drive section posture changing step S106, the drive section261is moved to the internal space510of the housing51, and the posture of the drive section261is changed. This makes it easy to insert the drive pulley261binto the inside of the other end of the belt55. Specifically, first, as shown by an arrow M1inFIG.7, the drive section261is moved from the external space toward the internal space510.FIG.7is a schematic view showing a state in which the drive section261is moved from the external space of the housing51toward the internal space510. The drive section261indicated by a solid line inFIG.7is the drive section261located at a position before the movement indicated by the arrow M1, and the drive section261indicated by a broken line inFIG.7is the drive section261located at a position after the movement indicated by the arrow M1. In this step, as shown by the solid line inFIG.7, when the housing51is viewed from the +z axis side to the −z axis side, the drive section261is held at a position where the missing sections522and the flange261coverlap with each other. At this time, the position of the drive section261in the z axis direction is set below the second protrusion519as shown inFIG.6. Further, as shown inFIG.6andFIG.7, the posture of the drive section261is a posture in which the drive axis AX4and the y axis are substantially parallel to each other and the flange261cis substantially parallel to the x-z plane. 4.4. Second Belt Winding Step In the second belt winding step S108, as indicated by an arrow M2inFIG.8, the drive section261is translated toward the +z axis side in a path in which both end portions of the flange261cpass through the missing sections522.FIG.8shows a state in which both end portions of the flange261care passing through the missing sections522. By moving the drive section261upward from below the first protrusion518and the second protrusion519in this manner, it is possible to bring the drive pulley261bcloser from below to the other end of the belt55. That is, assuming that the position in the z axis direction in which the first protrusion518and the second protrusion519are provided is a “reference position”, then by providing the missing sections522, the drive section261can be translated from a lower side (a region on the opposite side than the joint section53) of the reference position to an upper side (a region in which the joint section53is positioned) of the reference position. By allowing such translation, even in a state in which one end of the belt55is wound around the driven pulley532in advance, the drive pulley261bcan be inserted from below inside the other end of the belt55. Accordingly, finally, the belt55can be wound around the drive pulley261bwithout strongly bending the belt55. 
InFIG.7, a separation distance between the contact surface523of the first protrusion518and the contact surface523of the second protrusion519, that is, a separation distance between the support sections521, is defined as S1, and a separation distance between the groove524of the first protrusion518and the groove524of the second protrusion519, that is, a separation distance between the missing sections522, is defined as S2. These separation distances S1and S2refer to distances in the x axis direction. InFIG.7, the width of the motor body261ais defined as W1, and the width of the flange261cis defined as W2. These widths W1and W2refer to lengths in the x axis direction. These separation distances S1and S2and the widths W1and W2satisfy the following formula (1). W1<S1<W2<S2 (1) In the above formula (1), since W1<S1is established, when the drive section261is translated as indicated by an arrow M2inFIG.8, the motor body261acan pass between the support sections521. Further, in the above formula (1), W2<S2is established. Further, as shown inFIG.6, the width W3of the grooves524in the y axis direction is wider than the thickness t1of the flange261c. Therefore, when the drive section261is translated as indicated by an arrow M2inFIG.8, both ends of the flange261ccan pass through the missing sections522. Next, when the flange261chas passed through the missing sections522, the translation is stopped and the drive section261is held.FIG.9shows a state in which the flange261chas passed through the missing sections522, that is, a state in which the movement of the drive section261indicated by the arrow M2inFIG.8has been ended. InFIG.9, the drive pulley261bis in a state in which it is inserted inside the other end of the belt55from below. At this point, the drive pulley261bdoes not necessarily have to be inserted into the other end of the belt55, and it is sufficient that the other end of the belt55and the drive pulley261bare close to each other. FIG.10is a view showing a state in which the flange261chas passed through the missing sections522as viewed from a viewpoint different from that ofFIG.9. As shown inFIG.10, at this time, the flange261cis located above the first protrusion518and the second protrusion519. In this embodiment, even if the flange261cis positioned above the first protrusion518and the second protrusion519, part of the motor body261ais positioned between the first protrusion518and the second protrusion519. Note that the form of the drive section261is not limited to the illustrated form. Next, as shown inFIG.11, the drive section261is rotated about an axis parallel to the x axis as a central axis. To be specific, the drive section261is rotated as indicated by an arrow M3inFIG.11with a ridge line located at a lower end of the flange261cshown inFIG.11as the rotation axis. Accordingly, the drive pulley261bmoves upward and the motor body261amoves downward. As a result, the posture of the drive pulley261binserted inside the belt55changes, and accordingly, the other end of the belt55is wound around the drive pulley261b. Further, both end portions of the flange261ccome into contact with the contact surfaces523. This completes the operation of winding the belt55around the drive pulley261band the operation of positioning the drive section261in the z axis direction. Since S1<W2is established in the above formula (1), the flange261ccan be placed on the contact surfaces523when the drive section261is rotated as indicated by the arrow M3inFIG.11.
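The dimensional conditions used in this step, namely formula (1) together with the requirement that the groove width W3 exceed the flange thickness t1, can be checked numerically before the housing51 is machined. The following minimal Python sketch encodes these clearance conditions; the dimensions are hypothetical and serve only to illustrate how the inequalities constrain the design.

def assembly_clearances_ok(w1, s1, w2, s2, w3, t1):
    # w1: width of the motor body261a, w2: width of the flange261c (x direction)
    # s1: separation between the support sections521
    # s2: separation between the missing sections522 (grooves524)
    # w3: groove width in the y direction, t1: thickness of the flange261c
    motor_passes_supports = w1 < s1            # motor body fits between the supports
    flange_rests_on_supports = s1 < w2         # flange ends land on the contact surfaces523
    flange_passes_grooves = w2 < s2 and t1 < w3  # flange ends pass through the missing sections522
    return motor_passes_supports and flange_rests_on_supports and flange_passes_grooves

# Hypothetical dimensions in millimetres
print(assembly_clearances_ok(w1=60.0, s1=70.0, w2=90.0, s2=100.0, w3=12.0, t1=8.0))  # True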
Accordingly, the flange261ccan be supported by the support sections521. By adopting the procedure in which the belt55is wound around the drive pulley261bwhile changing the posture of the drive section261as described above, when the belt55is wound around the drive pulley261b, it is not necessary to strongly bend the belt55. In addition, in the robot2, the motor body261ais located on the side opposite to the reduction gear534with respect to belt55in the direction along the drive axis AX4. That is, the reduction gear534is located above the belt55, and the motor body261ais located below the belt55. Therefore, the belt55can be wound around the drive pulley261bsimply by inserting the drive pulley261bfrom below to inside the other end of the belt55, and the belt55and the motor body261aare less likely to interfere with each other in the process of the work. Therefore, from that point of view, it is not necessary to strongly bend the belt55. For these reasons, damage to the belt55can be avoided. On the other hand, in the related art, even if an attempt is made to fix the servo motor to the housing while changing the posture of the servo motor so as to insert the pulley into the inside of the timing belt, when the posture of the servo motor is changed, the flange and the protruding portion protruding from the inner wall of the housing interfere with each other. Therefore, the posture of the servo motor cannot be changed, and the pulley cannot be inserted into the inside of the timing belt. In contrast to such a related art, according to the above described structure and procedure, it is possible, while suppressing the load applied to the belt55, to wind the belt55around the drive pulley261bconnected to the motor body261a, and to perform positioning of the drive section261using engagement between the flange261cof the drive section261and the housing51of the base21. Further, it is not necessary to secure a space for bending the belt55. To be specific, in a case where the belt55is wound around the drive pulley261bsupported by the housing51in advance, it is necessary to wind the belt55around the drive pulley261bwhile bending the belt55, and thus a space for largely bending the belt55is necessary. For example, when the belt55is largely bent upward, it is necessary to extend the distance between the reduction gear534and the driven pulley532in the z axis direction in order to avoid interference between the reduction gear534and the belt55. However, if this distance is extended, a load is likely to be applied to the reduction gear534, which causes the life of the reduction gear534to be shortened. In the present embodiment, since it is not necessary to largely bend the belt55, it is possible to reduce the distance between the reduction gear534and the belt55in the z axis direction. As a result, the load applied to the reduction gear534can be reduced, and the life of the reduction gear534can be extended. In addition, a part of the motor body261ais arranged so as to overlap with the joint section containing portion51b. Accordingly, the housing51can be downsized. 4.5. Flange Fixing Step In the flange fixing step S110, as shown inFIG.12, the flange261cis fixed to the support sections521using screws526. To be specific, the screws526shown inFIG.12are inserted into the fixing holes261dof the flange261cshown inFIG.3, and the screws526are screwed into the fastening holes525shown inFIG.4. 
Note that the method of fixing the flange261cto the support section521is not limited to the method using the screws526, and other methods may be used. Further, the screws526and a tool for screwing the screws526into the fastening hole525can be introduced through the upper window516toward the internal space510. Therefore, by providing the upper window516, the efficiency of the assembly work of the robot2can be enhanced. Further, as described above, the drive section containing portion51ashown inFIG.3has the side window515and the full wall window517. The work of changing the position and the posture of the drive section261can be performed by a worker or a working robot by introducing a hand or an arm through at least one of the side window515or the full wall window517. Therefore, by providing the side window515and the full wall window517, the efficiency of the assembly work of the robot2can be enhanced. The base21is assembled as described above. Thereafter, the robot arm20is connected to the base21, so that the robot2is assembled. When the base21shown inFIG.2is viewed from a position along the drive axis AX4, the missing sections522are located between the drive axis AX4and the joint section53. That is, the positions of the missing sections522in the y axis direction is between the drive axis AX4and the joint section53shown inFIG.2. By providing the missing sections522at such positions, when the posture of the drive section261is changed from the posture shown inFIG.8to the posture shown inFIG.12, the posture change of the drive section261becomes smooth. That is, after the drive section261is translated as indicated by the arrow M2inFIG.8, the operation of inserting the drive pulley261bfrom below inside the other end of the belt55can be easily performed as shown inFIG.9. Further, the upper window516, the side window515, and the full wall window517can also be used as work routes when repair or maintenance is performed after assembling the robot2. With these work routes, it is possible to repair the base21in the installed posture without turning the base21upside down after assembly. Further, it is not necessary to remove the robot arm20from the base21for repair or the like. For this reason, the efficiency of the repair or the like can be enhanced. 5. Effects Achieved by Embodiment As described above, the method of assembling the robot according to the embodiment is a method of assembling the robot2, which has the base21(first member) and the first arm22(second member) that rotates relative to the base21, and the method includes the preparation step S102, the first belt winding step S104, the drive section posture changing step S106, the second belt winding step S108, and the flange fixing step S110. In the preparation step S102, the base21before assembly, which includes the housing51, the drive section261, the joint section53, and the belt55, is prepared. The housing51includes the first wall511, the second wall512, the first protrusion518, and the second protrusion519. The first wall511and the second wall512are disposed to face each other separated by a distance from each other (with the internal space510in between). The first protrusion518protrudes from the first wall511toward the second wall512. The second protrusion519protrudes from the second wall512toward the first wall511. The drive section261includes the motor body261a, the drive pulley261b, and the flange261c. The motor body261agenerates a driving force to rotate about a drive axis AX4. 
The drive pulley261bis connected to the motor body261a. The flange261cprotrudes from the motor body261ain a direction intersecting with the drive axis AX4. The joint section53includes the driven pulley532and transmits the driving force to the first arm22. The first protrusion518and the second protrusion519each include a support section521and a missing section522. With respect to the support sections521, the mutual separation distance S1is shorter than the length (width W2) in the direction in which the flange261cprotrudes. With respect to the missing sections522, the mutual separation distance S2is longer than the length (width W2) in the direction in which the flange261cprotrudes. In the first belt winding step S104, the belt55is wound around the driven pulley532. In the drive section posture changing step S106, the drive section261is brought close to the belt55by a path in which both end portions of the flange261cin a direction in which the flange261cprotrudes pass through the missing sections522. In the second belt winding step S108, the belt55is wound around the drive pulley261b. In the flange fixing step S110, the flange261cis fixed to the support section521. According to such an assembling method, the belt55can be wound around the drive pulley261bwhile suppressing the load applied to the belt55. In addition, it is possible to perform positioning of the driving section261using engagement between the flange261cof the drive section261and the housing51of the base21. Therefore, according to the assembling method described above, it is possible to assemble the robot2with high reliability while suppressing damage to the belt55. In addition, the robot2according to the embodiment includes the base21(first member) and the first arm22(second member) that rotates relative to the base21. The base21includes the housing51, the drive section261, the joint section53, and the belt55. The housing51includes the first wall511, the second wall512, the first protrusion518, and the second protrusion519. The first wall511and the second wall512are disposed to face each other separated by a distance from each other (with the internal space510in between). The first protrusion518protrudes from the first wall511toward the second wall512. The second protrusion519protrudes from the second wall512toward the first wall511. The drive section261includes the motor body261a, the drive pulley261b, and the flange261c. The motor body261agenerates a driving force to rotate about a drive axis AX4. The drive pulley261bis connected to the motor body261a. The flange261cprotrudes from the motor body261ain a direction intersecting with the drive axis AX4. The joint section53includes the driven pulley532and transmits the driving force to the first arm22. The belt55is wound around the drive pulley261band the driven pulley532. Each of the first protrusion518and the second protrusion519includes the support section521and the missing section522. With respect to the support sections521, the mutual separation distance S1is shorter than the length (width W2) in the projecting direction of the flange261c, and both end portions in the projecting direction of the flange261care supported. With respect to the missing sections522, the mutual separation distance S2is longer than the length (width W2) in the direction in which the flange261cprotrudes, and both ends of the flange261ccan pass through. 
According to such a configuration, it is possible to obtain the robot2in which the belt55can be wound around the drive pulley261bwhile suppressing the load applied to the belt55. In such a robot2, since damage to the belt55is suppressed, reliability is improved. In addition, by engaging the flange261cof the drive section261with the housing51of the base21, the drive section261can be positioned with respect to the housing51. In the robot2according to the embodiment, when viewed from a position along the drive axis AX4, the missing sections522are located between the drive axis AX4and the joint section53. By providing the missing sections522at such a position, when the posture of the drive section261changes from the posture shown inFIG.8to the posture shown inFIG.12, the posture change of the drive section261becomes smooth. This makes it possible to realize the robot2that is easy to assemble. In addition, in the robot2according to the embodiment, the housing51at least includes the internal space510, which is defined by the first wall511and the second wall512, and the upper window516, which is an opening that connects the internal space510and the external space. When viewed from a position along the drive axis AX4, that is, a position above the upper window516, the upper window516and the missing section522overlap each other. Thus, for example, when the missing sections522are formed in the first protrusion518and the second protrusion519using a processing tool, the processing tool can be introduced into the internal space510through the upper window516. Therefore, the grooves524can be machined without changing the posture of the housing51shown inFIG.12. Further, in the robot2according to the present embodiment, the support sections521have the fastening holes525. The flange261cis fastened to the support sections521using the fastening holes525. According to such a configuration, the flange261ccan be securely fixed to the support sections521, and the fixed condition can be easily released if necessary. For this reason, it is possible to improve maintainability while improving assembly efficiency of the robot2. In addition, in the robot2according to the embodiment, when viewed from a position along the drive axis AX4, the upper window516, which is the opening, and the fastening holes525overlap each other. That is, the fastening holes525are provided so as to be visible from the upper window516when the robot2is viewed from above. According to such a configuration, an operation of machining the fastening holes525and an operation of screwing the screws526into the fastening holes525can be efficiently performed through the upper window516. In addition, in the robot2according to the present embodiment, the support sections521include the contact surfaces523which contact the flange261c. These contact surfaces523are preferably machined surfaces. This makes it possible to increase the coplanarity of the two contact surfaces523. As a result, it is possible to increase the accuracy of the position and the posture of the drive section261with respect to the housing51. The contact surfaces523may be omitted from the support sections521. Further, in the robot2according to the present embodiment, the joint section53has the reduction gear534connected to the driven pulley532. The motor body261ais located on the opposite side from the reduction gear534with respect to the belt55in the direction along the drive axis AX4.
Thus, the belt55can be wound around the drive pulley261bwithout strongly bending the belt55and without securing a space for bending the belt55. As a result, damage to the belt55can be avoided, and the distance between the reduction gear534and the belt55can be shortened. As a result, it is possible to extend the lives of the belt55and the reduction gear534. In addition, the robot system1according to the present embodiment includes the robot2and the controller3that controls the operation of the robot2. The robot2, as described above, has high reliability and long life of the belt55and the reduction gear534. Therefore, it is possible to realize the robot system1having high reliability and a long life. The robot, the method of assembling the robot, and the robot system according to the present disclosure have been described above based on the embodiments shown in drawings, but the robot and the robot system according to the present disclosure are not limited to the embodiments. For example, each component of the embodiments may be replaced with an arbitrary configuration having the same function, an arbitrary configuration may be added to the embodiments, or a plurality of the embodiments may be combined. In addition, the method of assembling a robot according to the present disclosure may be a method in which a process for an arbitrary purpose is added to the above described embodiment. | 43,159 |
11858136 | DESCRIPTION OF EMBODIMENTS The present invention is described below in greater detail with reference to the drawings. Aspects to embody the invention (hereinafter referred to as embodiments) described below are not intended to limit the present invention. Components in the embodiments below include components easily conceivable by those skilled in the art and components substantially identical therewith, that is, what are called equivalents. Furthermore, the components in the embodiments below can be appropriately combined. First Embodiment FIG.1is a side view of a parallel link mechanism according to a first embodiment viewed from the side.FIG.2is a sectional view of proximal-end joints cut in the axial direction.FIG.3is a view viewed from an end-effector base in a second direction (toward the proximal-end joints).FIG.4is a view of the parallel link mechanism (without a tool) viewed from a first direction.FIG.5is a view for explaining directions indicating the extensions of the axes of rotation of respective joints.FIG.6is a side view of the parallel link mechanism according to the first embodiment in operation. As illustrated inFIG.1, a parallel link mechanism100according to the first embodiment includes a fixed base1, a plurality of link mechanisms3, a plurality of motors6, and an end-effector base50. The fixed base1is fixed to a base101. Each link mechanism3is coupled to the fixed base1at a first end. The motors6are provided to the fixed base1. The end-effector base50is coupled to a second end of each link mechanism3. The fixed base1has a plate shape. The fixed base1extends along a surface101aof the base101. The fixed base1is fixed to the base101with bolts, which are not illustrated. The fixed base1has a first surface1afacing the end-effector base50. The first surface1ais a flat surface. A virtual reference line Z extending in the normal direction with respect to the first surface1ais set at the center of the fixed base1. The reference line Z is used as a reference to dispose each part of the parallel link mechanism100. The center of the first surface1aof the fixed base1is provided with a fixing part1b. The fixing part1bhas a hole1copening toward the end-effector base50(refer toFIG.2). In the following description, a direction parallel to the reference line Z is referred to as the axial direction. In the axial direction, a direction in which the end-effector base50is disposed when viewed from the fixed base1is referred to as a first direction X1. In the axial direction, a direction in which the fixed base1is disposed when viewed from the end-effector base50is referred to as a second direction X2. A direction orthogonal to the reference line Z (parallel to the first surface1a) is referred to as the horizontal direction. In the horizontal direction, a direction away from the reference line Z is referred to as an outer side in the radial direction. In the horizontal direction, a direction closer to the reference line Z is referred to as an inner side in the radial direction. As illustrated inFIG.1, the motors6are fixed to the first surface1aof the fixed base1. The same number of (three) motors6as the link mechanisms3are provided. The three motors6are disposed at intervals of 120° about the reference line Z. The first surface1aof the fixed base1is provided with two pedestals4and5. The pedestal5protrudes farther in the first direction X1than the pedestal4. One of the three motors6is disposed on the first surface1aof the fixed base1. 
Another one of the three motors6is disposed on the pedestal4. The remaining one of the three motors6is set on the pedestal5. Therefore, the three motors6differ in position in the axial direction. In the following description, the three motors6are referred to as a first motor7, a second motor8, and a third motor9in the order of being disposed closer to the fixed base1. An output shaft7aof the first motor7extends in the first direction X1. The output shaft7ais provided with a drive pulley7b. Similarly, an output shaft8aof the second motor8and an output shaft9aof the third motor9extend in the first direction X1and are provided with drive pulleys8band9b, respectively. The number of link mechanisms3according to the present embodiment is three. In the following description, the three link mechanisms are referred to as a first link mechanism10, a second link mechanism20, and a third link mechanism30. The link mechanisms3(the first link mechanism10, the second link mechanism20, and the third link mechanism30) include the same technology application elements: a proximal-end joint (a first proximal-end joint11, a second proximal-end joint21, and a third proximal-end joint31), a proximal link (a first proximal link13, a second proximal link23, and a third proximal link33), an intermediate joint (a first intermediate joint14, a second intermediate joint24, and a third intermediate joint34), a distal link (a first distal link15, a second distal link25, and a third distal link35), and a distal-end joint (a first distal-end joint16, a second distal-end joint26, and a third distal-end joint36(which is not illustrated inFIG.1. Refer toFIG.4)). As illustrated inFIG.2, the proximal-end joints (the first proximal-end joint11, the second proximal-end joint21, and the third proximal-end joint31) include a columnar shaft2and cylindrical parts (a first cylindrical part12, a second cylindrical part22, and a third cylindrical part32) rotatably fitted around the shaft2. The shaft2extends in the axial direction. The end of the shaft2in the second direction X2is fitted into the hole1cin the fixing part1b. As a result, the shaft2integrates with the fixed base1, and the center of the shaft2overlaps the reference line Z. The end of the shaft2in the first direction X1is provided with a retaining part2bprotruding toward the outer side in the radial direction from an outer circumferential surface2aof the shaft2. The retaining part2bprevents the first cylindrical part12, the second cylindrical part22, and the third cylindrical part32from coming off the shaft2. The first cylindrical part12is fitted around the outer circumference of the shaft2. An inner circumferential surface12aof the first cylindrical part12is in slidable contact with the outer circumferential surface2aof the shaft2. The outer circumferential surface of the first cylindrical part12has a first driven pulley12b, a first fitted surface12c, and a first coupling surface12din order from the second direction X2to the first direction X1. As illustrated inFIG.1, the first driven pulley12bis disposed in the horizontal direction with respect to the drive pulley7bof the first motor7. An endless belt, which is not illustrated, is suspended between the first driven pulley12band the drive pulley7b. When the first motor7is driven, the power is transmitted to the first cylindrical part12. Thus, the first cylindrical part12rotates about the shaft2(reference line Z). The first fitted surface12cand the first coupling surface12dhave a circular shape in cross section. 
The end surface of the first cylindrical part12in the first direction X1has a recess12erecessed in the second direction X2. The recess12eaccommodates the retaining part2b. The inner circumferential surface12aof the first cylindrical part12according to the present embodiment is in slidable contact with the outer circumferential surface2aof the shaft2. To rotate the first cylindrical part12more smoothly, the present invention may include a bearing interposed between the inner circumferential surface12aof the first cylindrical part12and the outer circumferential surface2aof the shaft2. As illustrated inFIG.2, the second cylindrical part22is fitted around the outer circumference of the first fitted surface12cof the first cylindrical part12. An inner circumferential surface22aof the second cylindrical part22is in slidable contact with the first fitted surface12c. The outer circumferential surface of the second cylindrical part22has a second driven pulley22b, a second fitted surface22c, and a second coupling surface22din order from the second direction X2to the first direction X1. As illustrated inFIG.1, the second driven pulley22bis disposed in the horizontal direction with respect to the drive pulley8bof the second motor8. An endless belt, which is not illustrated, is suspended between the second driven pulley22band the drive pulley8b. When the second motor8is driven, the power is transmitted to the second cylindrical part22. Thus, the second cylindrical part22rotates about the shaft2(reference line Z). The second fitted surface22cand the second coupling surface22dhave a circular shape in cross section. The inner circumferential surface22aof the second cylindrical part22according to the present embodiment is in slidable contact with the first fitted surface12cof the first cylindrical part12. To rotate the second cylindrical part22more smoothly, the present invention may include a bearing interposed between the inner circumferential surface22aof the second cylindrical part22and the first fitted surface12cof the first cylindrical part12. As illustrated inFIG.2, the third cylindrical part32is fitted around the outer circumference of the second fitted surface22cof the second cylindrical part22. An inner circumferential surface32aof the third cylindrical part32is in slidable contact with the second fitted surface22c. The outer circumferential surface of the third cylindrical part32has a third driven pulley32band a third coupling surface32cin order from the second direction X2to the first direction X1. As illustrated inFIG.1, the third driven pulley32bis disposed in the horizontal direction with respect to the drive pulley9bof the third motor9. An endless belt, which is not illustrated, is suspended between the third driven pulley32band the drive pulley9b. When the third motor9is driven, the power is transmitted to the third cylindrical part32. Thus, the third cylindrical part32rotates about the shaft2(reference line Z). The third coupling surface32chas a circular shape in cross section. The inner circumferential surface32aof the third cylindrical part32according to the present embodiment is in slidable contact with the second fitted surface22cof the second cylindrical part22. To rotate the third cylindrical part32more smoothly, the present invention may include a bearing interposed between the inner circumferential surface32aof the third cylindrical part32and the second fitted surface22cof the second cylindrical part22. 
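In the transmission just described, each motor drives its cylindrical part through a drive pulley and a driven pulley connected by an endless belt, so the rotation of each proximal link about the shaft2 is the motor rotation scaled by the pulley ratio. The short Python sketch below illustrates this relation; the pulley diameters and motor angle are hypothetical values, since the embodiment does not specify them.

def proximal_link_angle(motor_angle_deg, drive_pulley_d, driven_pulley_d):
    # With an endless belt and no slip, the angular ratio is the inverse of the diameter ratio:
    # driven angle = motor angle * (drive pulley diameter / driven pulley diameter)
    return motor_angle_deg * (drive_pulley_d / driven_pulley_d)

# Example: first motor7 turns 90 degrees, drive pulley7b 20 mm, first driven pulley12b 60 mm
print(proximal_link_angle(90.0, 20.0, 60.0))  # -> 30.0 degree rotation of the first cylindrical part12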
The endless belt, which is not illustrated, suspended between the drive pulley7band the first driven pulley12b, the endless belt, which is not illustrated, suspended between the drive pulley8band the second driven pulley22b, and the endless belt, which is not illustrated, suspended between the drive pulley9band the third driven pulley32bdiffer in position in the axial direction. Therefore, the endless belts do not interfere with each other. As illustrated inFIG.3, the proximal links (the first proximal link13, the second proximal link23, and the third proximal link33) extend in the radial direction with a first end pointing to the inner side in the radial direction and a second end pointing to the outer side in the radial direction. The first end of the first proximal link13is coupled to the first coupling surface12dof the first cylindrical part12. The first end of the second proximal link23is coupled to the second coupling surface22dof the second cylindrical part22. The first end of the third proximal link33is coupled to the third coupling surface32cof the third cylindrical part32. The first proximal link13, the second proximal link23, and the third proximal link33are disposed at intervals of 120° about the shaft2when the parallel link mechanism100starts to be operated. A first end of the distal link (the first distal link15, the second distal link25, and the third distal link35) is coupled to the second end of the proximal link (the first proximal link13, the second proximal link23, and the third proximal link33) via the intermediate joint (the first intermediate joint14, the second intermediate joint24, and the third intermediate joint34). The first distal link15, the second distal link25, and the third distal link35are disposed to extend in the circumferential direction about the shaft2when the parallel link mechanism100starts to be operated. The intermediate joints (the first intermediate joint14, the second intermediate joint24, and the third intermediate joint34) and the distal-end joints (the first distal-end joint16, the second distal-end joint26, and the third distal-end joint36) rotatably couple the parts. The intermediate joints and the distal-end joints according to the present embodiment are composed of bolts and nuts. Therefore, the axis of rotation (refer to the extensions of the axes of rotation M1, M2, M3, N1, and N2illustrated inFIG.5) corresponds to the center of the shaft of the bolt. Second ends of the distal links (the first distal link15, the second distal link25, and the third distal link35) have through holes15a,25a, and35a, respectively, through which the shaft of the bolt passes. Similarly, the second ends of the proximal links (the first proximal link13, the second proximal link23, and the third proximal link33) and the first ends of the distal links (the first distal link15, the second distal link25, and the third distal link35) have through holes, which are not illustrated. The length in the radial direction of the proximal links is shorter in the order of the first proximal link13, the second proximal link23, and the third proximal link33. Similarly, the length of the distal links is shorter in the order of the first distal link15, the second distal link25, and the third distal link35. With this configuration, the first link mechanism10moves on the inner circumference side of the second link mechanism20and the third link mechanism30when the parallel link mechanism100is operated. 
The second link mechanism20moves on the outer circumference side of the first link mechanism10and on the inner circumference side of the third link mechanism30. The third link mechanism30moves on the outer circumference side of the first link mechanism10and the second link mechanism20. In other words, the three link mechanisms3are not in contact with each other. In addition, the proximal links (the first proximal link13, the second proximal link23, and the third proximal link33) and the distal links (the first distal link15, the second distal link25, and the third distal link35) are appropriately bent to avoid contact with the cylindrical parts (12,22, and32) and a tool110. As illustrated inFIG.4, the end-effector base50includes a body51, protrusions52, and a support53. The body51has a circular plate shape. The protrusions52protrude toward the outer side in the radial direction from the outer circumference of the body51. The support53is provided at the center of the body51. As illustrated inFIG.1, the body51extends in the horizontal direction when the parallel link mechanism100starts to be operated. The surface of the body51in the first direction X1is a first surface51afacing the direction in which the distal end of the tool110faces. The surface of the body51in the second direction X2is a facing surface51bfacing the fixed base1. As illustrated inFIG.4, pedestals54are provided at the ends of the respective protrusions52on the outer side in the radial direction. Each pedestal54is coupled to the second end of the distal link (the first distal link15, the second distal link25, and the third distal link35) via the distal-end joint (the first distal-end joint16, the second distal-end joint26, and the third distal-end joint36). The pedestal54inclines such that the extension of the axis of rotation of the distal-end joint (the first distal-end joint16, the second distal-end joint26, and the third distal-end joint36) (refer to N1and N2illustrated inFIG.5) faces the end of the tool110in the first direction X1(a distal end P of the tool110). The support53has a cylindrical shape and has a holding hole53apassing therethrough in the axial direction. The support53is provided on the first surface51aof the body51. The holding hole53apasses through the body51. The tool110is inserted and fitted into the holding hole53a. As illustrated inFIG.1, the tool110passes through the end-effector base50. In other words, the tool110protrudes from the facing surface51bin the second direction X2. The support53is provided with bolts53bpassing through the support53in the radial direction. The bolts53bare screwed into the support53. Rotating the bolt53bchanges the amount of protrusion into the holding hole53a. The bolts53bhold the tool110so as to prevent it from coming off the holding hole53a. The tool110has such a shape that the end in the first direction X1protrudes toward the outer side in the radial direction. The end surface of the tool110in the first direction X1is a circular flat surface. The center of the end surface of the tool110in the first direction X1is on the reference line Z. In the following description, the center of the end surface of the tool110in the first direction X1is referred to as the distal end P. The following describes the parallel link mechanism100in detail. When at least one of the motors6is driven, the parallel link mechanism100tilts the end-effector base50and changes the posture of the tool110(refer toFIG.6).
When the parallel link mechanism100is operated, the end-effector base50and the tool110tilt about a certain point. The certain point is the point of intersection at which the extensions of the axes of rotation of the joints intersect. The following describes the point of intersection (certain point) at which the extensions of the axes of rotation of the joints intersect according to the present embodiment. As illustrated inFIG.5, the axes of rotation of the first proximal-end joint11, the second proximal-end joint21, and the third proximal-end joint31overlap the reference line Z. Therefore, the extensions of the axes of rotation of the first proximal-end joint11, the second proximal-end joint21, and the third proximal-end joint31pass through the distal end P of the tool110. The extension M1of the axis of rotation of the first intermediate joint14, the extension M2of the axis of rotation of the second intermediate joint24, and the extension M3of the axis of rotation of the third intermediate joint34intersect at the distal end P of the tool110. The extension N1of the axis of rotation of the first distal-end joint16, the extension N2of the axis of rotation of the second distal-end joint26, and the extension of the axis of rotation of the third distal-end joint36(which is not illustrated in the first embodiment. Refer to an extension N3inFIG.7according to a second embodiment) intersect at the distal end P of the tool110. As described above, the extensions of the axes of rotation of the proximal-end joints, the extensions of the axes of rotation of the intermediate joints, and the extensions of the axes of rotation of the distal-end joints intersect at the distal end P of the tool110. Thus, the certain point according to the present embodiment is at the distal end P of the tool110. When the parallel link mechanism100according to the first embodiment is operated, the tool110changes the posture about the distal end P as illustrated inFIG.6. Therefore, the position of the distal end P of the tool110is not displaced. As described above, the parallel link mechanism100according to the first embodiment includes the fixed base1, the end-effector base50, and at least three or more link mechanisms3. The fixed base1is fixed to the base101. The end-effector base50is disposed away from the fixed base1in the first direction X1and supports the tool110. The link mechanisms3are each coupled to the fixed base1at a first end and to the end-effector base50at a second end. The end-effector base50includes the support53and the facing surface51b. The support53supports the tool110such that the distal end of the tool110points in the first direction X1. The facing surface51bfaces the second direction X2in which the fixed base1is disposed when viewed from the end-effector base50. The link mechanisms3each include the proximal-end joint (11,21, and31), the proximal link (13,23, and33), the intermediate joint (14,24, and34), the distal link (15,25, and35), and the distal-end joint (16,26, and36). The proximal-end joint (11,21, and31) is rotatably coupled to the fixed base1. The proximal link (13,23, and33) is coupled to the proximal-end joint (11,21, and31) at a first end. The intermediate joint (14,24, and34) is provided at a second end of the proximal link (13,23, and33). The distal link (15,25, and35) is rotatably coupled to the proximal link (13,23, and33) at a first end via the intermediate joint (14,24, and34). 
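Because every extension of the axes of rotation passes through the distal end P, tilting the end-effector base50 is a pure rotation about P, and any movement of the distal end depends only on the offset between P and the actual center of rotation. The minimal Python sketch below illustrates that relation: for a tilt angle about a center located a distance d from the distal end along the tool axis, the tip traces a chord of length 2*d*sin(angle/2), which vanishes when the center coincides with P. The numerical values are hypothetical and for illustration only.

import math

def tip_displacement(offset_d, tilt_deg):
    # offset_d: distance from the center of rotation to the distal end P along the tool axis [m]
    # tilt_deg: tilt angle of the end-effector base50 [degrees]
    # Chord length traced by a point at radius offset_d rotated by tilt_deg
    return 2.0 * offset_d * math.sin(math.radians(tilt_deg) / 2.0)

print(tip_displacement(0.0, 20.0))    # center of rotation at P  -> 0.0 (the distal end P does not move)
print(tip_displacement(0.010, 20.0))  # center 10 mm away from P -> about 3.5 mm of tip movement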
The distal-end joint (16,26, and36) rotatably couples a second end of the distal link (15,25, and35) to the end-effector base50. The point of intersection at which the extensions of the axes of rotation of the proximal-end joints (11,21, and31), the extensions (M1, M2, and M3) of the axes of rotation of the intermediate joints (14,24, and34), and the extensions (N1and N2) of the axes of rotation of the distal-end joints intersect is the center of rotation of the end-effector base50. The center of rotation of the end-effector base50is positioned in the first direction X1with respect to the facing surface51band overlaps the distal end P of the tool110. The proximal-end joints (11,21, and31) are coaxially disposed. The parallel link mechanism100includes the motors6that rotate the proximal links (13,23, and33) about the axis of rotation of the proximal-end joints (11,21, and31). In the parallel link mechanism100according to the first embodiment described above, the distal end P of the tool110does not move when the tool110changes the posture. While the first embodiment has been described above, the parallel link mechanism according to the present invention is not limited to that described in the first embodiment. For example, the position of the center of rotation of the end-effector base50is not limited to the example according to the first embodiment. The following describes a modification obtained by changing the position of the center of rotation of the end-effector base50. The center of rotation of the end-effector base50according to the present invention may be slightly deviated in the first direction X1or the horizontal direction with respect to the distal end P of the tool110. This modification can also reduce the amount of movement of the distal end P of the tool110. Alternatively, the center of rotation of the end-effector base50may be located between the facing surface51band the distal end P of the tool110on the center line of the tool110. Specifically, as illustrated inFIG.5, the reference line Z passing through the center line of the tool110passes through a point Q on the facing surface51bof the end-effector base50. The center of rotation of the end-effector base50may be located on the reference line Z and between the point Q and the distal end P. According to the modification, the center of rotation of the end-effector base50overlaps the distal end P of the tool110when viewed from the axial direction. In other words, the distance between the center of rotation of the end-effector base50and the center line of the tool110is zero. Therefore, the amount of movement of the distal end P of the tool110can be reduced. As described above, the center of rotation of the end-effector base50according to the present invention simply needs to be located in the first direction X1with respect to the facing surface51bof the end-effector base50. With this configuration, the distance from the center of rotation of the end-effector base50to the distal end P of the tool110is relatively short, and the amount of movement of the distal end P of the tool110can be reduced. While the first embodiment has been described above, the parallel link mechanism according to the present invention is not limited to the example described in the first embodiment. While the three motors6according to the first embodiment, for example, are disposed at equal intervals (intervals of 120°) about the reference line Z, they may be differently disposed in the parallel link mechanism according to the present invention. 
In other words, the following parallel link mechanisms are included in the parallel link mechanism according to the present invention: a parallel link mechanism in which the three motors6are disposed in the circumferential direction about the reference line Z, but their intervals are not equal, and a parallel link mechanism in which the three motors6are collectively disposed in the same direction when viewed from the reference line Z. Second Embodiment The following describes a parallel link mechanism100A according to a second embodiment. Components technically the same as those described in the first embodiment are denoted by like reference numerals, and detailed explanation thereof is omitted. FIG.7is a side view of the parallel link mechanism according to the second embodiment in operation viewed from the side. The parallel link mechanism100A according to the second embodiment is different from the parallel link mechanism100according to the first embodiment in that it does not include the motors6. The parallel link mechanism100A according to the second embodiment is different from the parallel link mechanism100according to the first embodiment in that the proximal-end joints (11A,21A, and31A) are not coaxially disposed. The following describes only the differences. Motors according to the second embodiment are provided to a device (or a base), which is not illustrated, to which the parallel link mechanism100A is fixed. The three link mechanisms3(10,20, and30) are operated by power transmitted from the motors in the device (or the base), which is not illustrated. As described above, the parallel link mechanism according to the present invention does not necessarily include the motors. There are no particular restrictions on the positions of the three motors provided to the device, which is not illustrated, to which the parallel link mechanism100A is fixed. The proximal-end joints (11A,21A, and31A) are dispersedly disposed on a first surface a of the fixed base1. The extensions of the rotational axes of the proximal-end joints (11A,21A, and31A) (only an extension L of the axis of rotation of the first proximal-end joint11A is illustrated inFIG.7) intersect at the distal end P of the tool110. As described above, the parallel link mechanism100A according to the second embodiment also tilts the end-effector base50about the distal end P of the tool110. Therefore, the position of the distal end P of the tool110is not displaced. While the first and the second embodiments have been described above, the number of link mechanisms3according to the present invention is not limited to three. The number of link mechanisms3simply needs to be at least three or more and may be four.
REFERENCE SIGNS LIST
1 fixed base
2 shaft
3 link mechanism
6 motor
10 first link mechanism
11, 11A first proximal-end joint
12 first cylindrical part
13 first proximal link
14 first intermediate joint
15 first distal link
16 first distal-end joint
20 second link mechanism
21, 21A second proximal-end joint
22 second cylindrical part
23 second proximal link
24 second intermediate joint
25 second distal link
26 second distal-end joint
30 third link mechanism
31, 31A third proximal-end joint
32 third cylindrical part
33 third proximal link
34 third intermediate joint
35 third distal link
36 third distal-end joint
50 end-effector base
51b facing surface
53 support
100, 100A parallel link mechanism
101 base
L, M1, M2, M3, N1, N2 extension
P distal end
Q point
Z reference line | 27,925
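The defining geometric property of both embodiments above is that the extensions of the rotation axes of the proximal-end, intermediate, and distal-end joints all pass through one point (the distal end P of the tool110), which therefore becomes the center of rotation of the end-effector base50. Purely as an illustration of how such a property can be checked numerically, the following sketch computes the least-squares common intersection point of a set of 3D lines; the axis positions and directions used here are invented placeholder values, not data from the disclosure.

```python
import numpy as np

def closest_point_to_lines(points, directions):
    """Least-squares point minimizing the squared distance to a set of 3D lines.

    Each line i passes through points[i] with (not necessarily unit) direction
    directions[i].
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for a, d in zip(points, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to d
        A += P
        b += P @ a
    return np.linalg.solve(A, b)

# Placeholder axes: three joint axes deliberately constructed so that their
# extensions all pass through a common point p_tip (standing in for the tool tip P).
p_tip = np.array([0.0, 0.0, -0.30])              # assumed tool-tip position [m]
joint_points = [np.array([0.10, 0.00, 0.00]),    # a point on each joint axis
                np.array([-0.05, 0.09, 0.00]),
                np.array([-0.05, -0.09, 0.00])]
joint_dirs = [p_tip - p for p in joint_points]   # each axis aimed at the tip

estimate = closest_point_to_lines(joint_points, joint_dirs)
print("estimated center of rotation:", np.round(estimate, 4))
print("residual distance to tool tip:", np.linalg.norm(estimate - p_tip))
```

If the residual distance is not close to zero, the axes do not share a common point, and the tool tip would translate rather than stay fixed as the posture changes.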
11858137 | The drawings described herein are for illustration purposes only and are not intended to limit the scope of the present disclosure in any way. DETAILED DESCRIPTION The following description is merely exemplary in nature and is not intended to limit the present disclosure, application, or uses. It should be understood that throughout the drawings, corresponding reference numerals indicate like or corresponding parts and features. Referring toFIGS.1-3, a mechanical device for grasping an object without a power source (e.g., electricity, hydraulics, pneumatics) is illustrated and generally indicated by reference numeral20. The mechanical device comprises a receiver22and opposed grabber assemblies24. The receiver22is configured to be attached to an arm or component of an exoskeleton suit, or another robotic/automated mechanism (not shown). Accordingly, the receiver22in this form includes a platform26having a plurality of holes/openings28that receive bolts (not shown) to secure the receiver22to the arm/component of an exoskeleton suit or other robotic/automated mechanism. The opposed grabber assemblies24are secured to outboard portions of the receiver22and are arranged to grasp an object such as a shaft10. While two (2) opposed grabber assemblies are shown, it should be understood that the teachings of the present disclosure may be applied to at least one grabber assembly (one form of which is described in greater detail below) or more than two (2) grabber assemblies as illustrated herein. Further, the grabber assemblies24may be arranged in a variety of positions relative to the receiver22, and thus the opposed grabber assemblies24as illustrated and described herein should not be construed as limiting the scope of the present disclosure. In yet another variation, the receiver22is an optional component, as a grabber assembly24may be directly secured to the arm/component of an exoskeleton suit or other robotic/automated mechanism. As best shown inFIG.3, each of the grabber assemblies24comprises a first arm30having a proximal end portion32, a distal end portion34, and a hook36disposed at the distal end portion34. Similarly, a second arm40comprises a proximal end portion42, a distal end portion44, and a hook46disposed at the distal end portion44. As shown, the hooks36/46are integrally formed with the first and second arms30/40, respectively. In this form, the hooks36/46extend in opposite directions (best shown inFIG.2B) to facilitate grasping an object such as the shaft10. However, it should be understood that the hooks36/46may extend in the same direction, which would be beneficial in applications where the mechanical device20is used to grasp a wheel and the hooks36/46extend through spokes or lug nut holes in the wheel (not shown). Additionally, the hooks36/46may take on any geometric form or have additional functionality (e.g., magnets) while remaining within the scope of the present disclosure. Referring also toFIGS.4-6, each grabber assembly24includes a hub assembly50comprising a shaft52rotationally coupled to opposed bevel gears54. In one form, the opposed bevel gears54and the ends of the shaft52comprise races56(best shown inFIG.3), which house bearings (not shown) to provide the rotational coupling around axis “X.” The shaft52further comprises a set of offset apertures60and62. The first arm30extends through one of the offset apertures60of the shaft52, and the second arm40extends through the other offset aperture62.
The first arm30further comprises a first bevel gear64disposed at the proximal end portion32and engaging one of the opposed bevel gears54. Similarly, the second arm40further comprises a second bevel gear66disposed at the proximal end portion42and engaging the other opposed bevel gear54. In one form, each of the first arm30and the second arm40comprise a collar70, each of which engages a boss72on each shaft52(best shown inFIGS.4and5). As further shown, each of the first bevel gear64and the second bevel gear66comprise an end face65/67, respectively, which also engage the bosses72. Accordingly, the collars70and end faces65/67secure the first arm30and the second arm40to the shaft52. A variety of assembly approaches and component configurations may be employed to facilitate this design. For example, the first bevel gear64and the second bevel gear66may be separate pieces that are secured in place after each of the first arm30and the second arm40are inserted through the offset apertures60/62. Alternately, the shaft52may be a two-piece design (not shown), wherein each of the arms30/40are located in one half and the other half is subsequently secured around the arms30/40and the one half. Referring back toFIG.1and alsoFIG.3, the opposed bevel gears54are fixed in this form of the present disclosure. More specifically, the inner bevel gears54are secured to flanges80of a support frame82of the receiver22. The receiver22in this form comprises a center ring84opposite the platform26, which is rotationally mounted to the support frame82. The support frame82and the center ring84each includes races86, which house bearings (not shown) for the rotational movement around axis “Y” as shown. As further shown, at least one shield90extends between the opposed bevel gears54and covers at least a portion of each opposed bevel gears54. In this form, three (3) shields90are employed, which are integrally formed with the opposed bevel gears54as a single/unitized part. However, it should be understood that any number of shields may be employed, which may be separate parts or formed integrally with the opposed bevel gears54as illustrated herein, while remaining within the scope of the present disclosure. The mechanical device20further comprises a stop to limit motion of the first and second arms30/40. In this form, the stop comprises a cage94secured to the flanges80of the support frame82of the receiver22. As shown, the cage94surrounds the first and second arms30/40. More specifically, the cage94comprises a u-shaped bar that extends from one side of the cage94to the other. It should be understood that “surrounds” as used herein should be construed to mean completely surrounding as illustrated, or at least partially surrounding the first and second arms30/40. As long as the cage94functions as a stop to limit motion of the first and second arms30/40, then any form thereof should be construed as falling within the scope of the present disclosure. As further shown, the cage94is also secured to the flanges80of the support frame82of the receiver22. More specifically, in this form, the opposed bevel gears54are secured to the cage94, which is secured to the flange80. In the design illustrated, these components are secured together with bolts (not shown). However, it should be understood that other means to secure these components together, or combining these individual components into fewer parts, should be construed as falling within the scope of the present disclosure. 
Referring now toFIGS.7A-7Cin conjunction withFIG.1, movement of the mechanical linkage, which in this form is a grabber assembly24with bevel gears, is illustrated in greater detail. When the first arm30or the second arm40is physically placed against a fixed object100, (in this example the first arm30), and the mechanical device20is moved or translated from the receiver22, this causes movement of the mechanical linkage. More specifically, the first arm30rotates about the longitudinal axis “X” of the shaft via the rotational connection between the shaft52and the opposed bevel gears54. As the first arm30rotates, the shaft52rotates, and thus the bevel gear64rotates and engages bevel gear54. With rotation of the shaft52, the second arm40also rotates, and its bevel gear66engages the opposed bevel gear54. This rotation of the shaft52, which is caused by forces from the first arm30being displaced against the fixed object100, opens the arms30/40so that the hooks36/46can be placed around the fixed object100. Then, the mechanical device20is translated into position such that the hooks36/46are around the fixed object100, then the first arm30is placed against the fixed object again in an opposite direction to cause the first and second arms30/40to rotate in the opposite direction to close the hooks36/46around the fixed object100. To release the fixed object100, the first arm30(or the second arm40) is displaced against the fixed object100to open the hooks36/46. It should be understood that the first and second arms30/40may be displaced against any fixed object other than the fixed object100being grasped. Further, the fixed object need not be statically “fixed,” and rather the fixed object should only be rigid enough to cause movement of one of the first or second arms30/40without the fixed object moving itself. Therefore, displacement of the first arm30or the second arm40against an object causes movement of the mechanical linkage (in this form the bevel gears) and thus movement of the second arm40or first arm30, respectively. Referring now toFIGS.8-12, another form of a mechanical device according to the present disclosure is illustrated and generally indicated by reference numeral200. In this form, the mechanical device200comprises a central frame202, a first arm204and a second arm206extending through the central frame202. Each of the first arm204and second arm206comprise upper links208and lower links210. The mechanical device200further includes a pair of upper opposed receivers220, each upper opposed receiver220comprising an arm224with end portions226. Each end portion226is pivotally coupled to the upper links208of the first arm204and the second arm206. A pair of lower opposed receivers230each similarly comprise an arm232with end portions234. Each end portion234is pivotally coupled to the lower links210of the first arm204and the second arm206, respectively. In one form, each of the upper opposed receivers220and the lower opposed receivers230comprise a slot236(best shown inFIG.10). Each of the upper links208and the lower links210are disposed within a respective slot as shown so that their rotational movement can be accommodated. Similar to the previous form of the present disclosure, the mechanical device200also includes a stop to limit motion of at least one of the first arm204and the second arm206. More specifically, a cage250is secured to the central frame202. In this form, the central frame202includes opposed tabs252that are disposed within apertures254of the cage250. 
As further shown, the upper receiver220and the lower receiver230in this form include pins260, which are disposed within slots264of the cage250. The slots264are provided to allow some vertical “play” of the first arm204and the second arm206during operation. Accordingly, similar to the previously described form with bevel gears, displacement of the first arm204or the second arm206against an object causes rotation of the lower opposed receivers230and the upper opposed receivers220and thus movement of the second arm206or first arm204, respectively. Similar features such as the hooks36/46as described above are not repeated with this variation for purposes of brevity. Referring toFIG.13, a method of grasping an object without a power source according to the present disclosure is illustrated. The method comprises moving the mechanical device as set forth herein and engaging the first arm or the second arm against an object. The object displaces the arm being engaged with the object, which causes movement of the mechanical linkage, which causes movement of the other arm. Accordingly, each of the arms may be moved against an object to open or close the arm, thereby grasping and releasing the object. Advantageously, the mechanical linkage disposed near the proximal end portion of the first arm and the second arm kinematically couples the first arm and the second arm, wherein displacement of the first arm or the second arm against the object causes movement of the mechanical linkage, which causes movement of the second arm or first arm, respectively. Grasping an object “without a power source” as used herein should be construed to mean grasping an object by only mechanical movement of the first and second arms as illustrated and described herein. Unless otherwise expressly indicated herein, all numerical values indicating mechanical/thermal properties, compositional percentages, dimensions and/or tolerances, or other characteristics are to be understood as modified by the word “about” or “approximately” in describing the scope of the present disclosure. This modification is desired for various reasons including industrial practice, material, manufacturing, and assembly tolerances, and testing capability. As used herein, the phrase at least one of A, B, and C should be construed to mean a logical (A OR B OR C), using a non-exclusive logical OR, and should not be construed to mean “at least one of A, at least one of B, and at least one of C.” The description of the disclosure is merely exemplary in nature and, thus, variations that do not depart from the substance of the disclosure are intended to be within the scope of the disclosure. Such variations are not to be regarded as a departure from the spirit and scope of the disclosure. | 13,284 |
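The grasping method of FIG. 13 relies entirely on the mechanical coupling described above: pushing one arm against a sufficiently rigid object rotates the shaft (or the opposed receivers), and the coupling drives the other arm by a corresponding amount, so the pair of hooks opens or closes without any power source. The toy sketch below models that coupling as a simple proportional angular relationship; the 1:1 coupling ratio and the open/close threshold are assumptions made for illustration only, not dimensions or logic taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class CoupledGrabber:
    """Toy model of two arms kinematically coupled through a common hub."""
    coupling_ratio: float = 1.0      # assumed 1:1 bevel-gear coupling
    first_arm_angle: float = 0.0     # degrees from the fully closed position
    second_arm_angle: float = 0.0

    def displace_first_arm(self, delta_deg: float) -> None:
        # Displacing one arm against a fixed object rotates the hub, which
        # drives the opposite arm through the coupling.
        self.first_arm_angle += delta_deg
        self.second_arm_angle += self.coupling_ratio * delta_deg

    @property
    def is_open(self) -> bool:
        # Assumed threshold: the hooks clear the object once both arms exceed 30 deg.
        return min(self.first_arm_angle, self.second_arm_angle) > 30.0

grabber = CoupledGrabber()
grabber.displace_first_arm(40.0)    # push the arm against the object -> hooks open
print("open:", grabber.is_open)
grabber.displace_first_arm(-40.0)   # push in the opposite direction -> hooks close
print("open:", grabber.is_open)
```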
11858138 | In the drawings, 1. trunk, 2. right thigh linkage, 3. right calf linkage, 4. right foot, 5. left thigh linkage, 6. left calf linkage, 7. left foot, 8. right hip yaw joint, 9. right hip swinging joint, 10. right hip pitch joint, 11. right leg knee joint, 12. right leg ankle joint, 13. left hip yaw joint, 14. left hip swinging joint, 15. left hip pitch joint, 16. left leg knee joint, 17. left leg ankle joint, 18. initial standing state, 19. supporting phase of left single leg, 20. in-air phase of left leg, 21. single leg supporting phase of right leg, 22. in-air phase of right leg, 23. equivalent linkage, 24. equivalent center of mass, 25. movement trajectory of swinging leg. DESCRIPTION OF EMBODIMENTS The present disclosure will be further described below with reference to the drawings and examples. The present disclosure proposes a method for realizing a dynamic running gait of a biped robot on a rough terrain road. The method constructs a hybrid inverted pendulum (HIP) model for controlling the dynamic running of the biped robot on the rough terrain road, and the HIP model is a combination of a linear inverted pendulum and a spring loaded inverted pendulum. To further explain the method, the HIP model refers to a model in which a torso of the robot is simplified as a center of mass that concentrates all masses, a leg of the robot is simplified to a retractable linkage without mass and inertia that connects the torso and a foot, and the center of mass is constrained to move in a constrained plane. When the robot changes from an in-air phase to a landing phase, there is a large impact force between the robot and the ground. The adopted HIP model shows characteristics of the SLIP model, and the center of mass is compressed to cushion the impact force of the ground. When the robot is stable on the ground, the HIP model shows characteristics of the LIP model, and the center of mass of the robot is controlled to a set height. A state machine of the biped robot refers to dividing a stable advancing process of the robot into four states according to whether the left leg and right leg are in a supporting phase, namely a supporting phase of the left leg, an in-air phase of the left leg, a supporting phase of the right leg, and an in-air phase of the right leg. A stable and periodic switching of the state machine forms a running gait of the biped robot, and a corresponding controller is set in each state to realize a balance control of the robot and a motion control of the swinging leg. As shown inFIGS.1-7, the method for realizing the running gait of the biped robot according to the present disclosure includes adopting the HIP model to realize the balance control of the biped robot and using movement trajectory planning of the supporting leg and the swinging leg to realize the running gait of the biped robot. The structure of the biped robot is shown inFIG.1, and the biped robot includes a torso1and lower limbs. The lower limbs include a right thigh linkage2, a right calf linkage3, a right foot4, a left thigh linkage5, a left calf linkage6, a left foot7, a right hip yaw joint8, a right hip swinging joint9, a right hip pitch joint10, a right leg knee joint11, a right leg ankle joint12, a left hip yaw joint13, a left hip swinging joint14, a left hip pitch joint15, a left leg knee joint16, and a left leg ankle joint17.
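The four-state gait cycle described above can be captured by a simple cyclic state machine with a unique switching direction. The sketch below is only a schematic of that switching logic; the ordering follows the reference numerals of FIG. 2 (19 through 22), and the transition conditions (elapsed phase and foot touchdown) are illustrative assumptions rather than the patented switching rule.

```python
from enum import Enum, auto

class GaitState(Enum):
    """Four gait states of the running state machine (FIG. 2, numerals 19-22)."""
    LEFT_SUPPORT = auto()       # supporting phase of the left leg
    LEFT_IN_AIR = auto()        # in-air phase of the left leg
    RIGHT_SUPPORT = auto()      # supporting phase of the right leg
    RIGHT_IN_AIR = auto()       # in-air phase of the right leg

# Unique jumping direction: the states switch cyclically in a fixed order.
NEXT_STATE = {
    GaitState.LEFT_SUPPORT: GaitState.LEFT_IN_AIR,
    GaitState.LEFT_IN_AIR: GaitState.RIGHT_SUPPORT,
    GaitState.RIGHT_SUPPORT: GaitState.RIGHT_IN_AIR,
    GaitState.RIGHT_IN_AIR: GaitState.LEFT_SUPPORT,
}

def advance(state: GaitState, phase: float, phase_duration: float,
            swing_foot_contact: bool) -> GaitState:
    """Switch to the next state when the current phase has run its course.

    The trigger (elapsed phase or touchdown of the landing foot) is an assumed
    condition used only to make the sketch concrete.
    """
    if phase >= phase_duration or swing_foot_contact:
        return NEXT_STATE[state]
    return state

# One illustrative switch: a left-leg support phase ends after 0.25 s.
state = GaitState.LEFT_SUPPORT
state = advance(state, phase=0.26, phase_duration=0.25, swing_foot_contact=False)
print(state)   # GaitState.LEFT_IN_AIR
```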
The torso1is equipped with an inertial measurement unit configured to measure posture information of the body, and the left foot7and the right foot4each are equipped with a force sensor configured to measure a contact force between the foot and the ground. The control method is described in detail below. The HIP model is shown inFIG.3: the torso1of the robot is simplified as a center of mass24that concentrates all masses, the legs of the robot are simplified as a retractable linkage23without mass and inertia that connects the torso and the feet, and the center of mass is constrained to move in a constrained plane. The state machine of the biped robot refers to dividing the stable advancing process of the robot into four states according to whether the left and right legs are in the supporting phase, namely a supporting phase19of the left leg, an in-air phase20of the left leg, a supporting phase21of the right leg, and an in-air phase22of the right leg, which are as shown inFIG.2. The biped robot starts to jump into a loop of the state machine from an initial standing state18, and sets a corresponding controller in the corresponding state to achieve the balance of the torso of the body and the movement planning of the swinging leg. The state machine has a unique jumping direction, and a stable and periodic switching of the state machine forms the stable advancing process of the biped robot. The balance control of the biped robot includes a balance control of the body posture, a balance control of the height of the center of mass, and a control of an advancing speed of the robot. This method avoids the limitations of adopting the ZMP stability criterion, and successfully achieves a stable running gait of the biped robot. The use of small feet can achieve dynamic balance and stability, so that the biped robot can stably run and walk on the rough terrain road. The balance control of the body posture, namely controlling the posture angle of the torso of the robot body to be maintained near a stable range during the single-leg supporting phase of the biped robot, adopts a classic body posture balance control strategy and uses PD control to achieve the balance of the torso1. The specific posture balance control formula is as follows:

$\tau_h = K_p(q_d - q) + K_d(\dot{q}_d - \dot{q}) + \tau_f$,

where τ_h denotes a balance torque of the torso; q_d denotes a desired body posture angle; q denotes a body posture angle; q̇_d denotes a desired body posture angular velocity; q̇ denotes a body posture angular velocity; K_p and K_d respectively denote corresponding feedback coefficient matrices to be determined that are related to the actual biped robot platform; and τ_f denotes a feedforward torque generated by the gravity of the center of mass applied on a hip joint of the supporting leg. The hip joint of the supporting leg is used to generate τ_f to achieve the balance of the torso of the robot.
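A minimal numerical sketch of the posture-balance law above for a two-component posture vector follows. The gain values and the gravity feedforward model are placeholder assumptions: the disclosure only states that K_p, K_d, and τ_f are to be determined for the actual robot platform.

```python
import numpy as np

def posture_balance_torque(q_d, q, qdot_d, qdot, Kp, Kd, tau_ff):
    """PD posture balance: tau_h = Kp*(q_d - q) + Kd*(qdot_d - qdot) + tau_f."""
    return Kp @ (q_d - q) + Kd @ (qdot_d - qdot) + tau_ff

# Placeholder gains for a 2-DOF body posture vector [roll, pitch] (rad).
Kp = np.diag([120.0, 150.0])        # proportional feedback gains (to be tuned)
Kd = np.diag([8.0, 10.0])           # derivative feedback gains (to be tuned)

q_d = np.zeros(2)                   # desired posture: upright torso
q = np.array([0.03, -0.05])         # measured posture from the IMU (rad)
qdot_d = np.zeros(2)
qdot = np.array([0.10, 0.20])       # measured posture rates (rad/s)

# Assumed gravity feedforward: torque of the torso weight about the
# supporting-leg hip joint for a small center-of-mass offset.
mass, g, com_offset = 12.0, 9.81, np.array([0.02, 0.01])   # kg, m/s^2, m
tau_ff = mass * g * com_offset

tau_h = posture_balance_torque(q_d, q, qdot_d, qdot, Kp, Kd, tau_ff)
print("hip balance torque [N*m]:", np.round(tau_h, 2))
```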
The balance control of the height of the center of mass, namely controlling the center of mass of the robot to move in a constrained plane parallel to the ground, is based on the theory of the LIP model and achieves the balance of the height of the center of mass of the body by controlling the force supplied by the ground. As shown inFIG.3, the balance control of the height of the center of mass is achieved by controlling F_z, which is set as the following formula:

$F_z = K_{pf}(h_{set} - h) + K_{df}(-v_h) + Mg$,

where h_set denotes a set height of the center of mass; h denotes an actual height of the center of mass; v_h denotes a velocity of the center of mass in an upright direction; K_pf and K_df denote coefficients to be determined; M denotes a weight of the center of mass of the robot; and g denotes an acceleration of gravity. Through this formula, the center of mass of the robot is always stable at the set height. The control of the advancing speed of the robot refers to controlling the speed of the center of mass of the biped robot to approach a desired speed, or to remain stable at the desired speed, by means of the foothold of the biped robot, and controlling the robot speed by the coordinate of the foothold, namely a step length in the advancing direction of the robot. The formula of the specific coordinate of the foothold is as follows:

$L_f = K_{0v} + K_{pv}v + K_{dv}(v - v_d)$,

where L_f denotes the coordinate of the foothold; v denotes a speed of the robot; v_d denotes the desired speed; and K_0v, K_pv, and K_dv all denote coefficient matrices to be determined that are related to a time for the supporting phase of a single leg and the height of the center of mass. The coordinate of the foothold is set as a coordinate of an end point of the swinging leg swinging in a gait cycle, so as to realize the control of the speed of the center of mass by the foothold. The movement trajectory planning of the supporting leg and the movement trajectory planning of the swinging leg refer to using trajectory planning to perform a contraction of the supporting leg and a stretching of the swinging leg to realize the in-air phase of both legs of the biped robot. Taking a case in which the supporting phase of the single right leg is switched into the supporting phase of the single left leg as an example, as shown inFIG.4, a moment when the supporting phase of a single leg of the biped robot is to be ended is defined as the beginning of a gait cycle, and the phase information is set to zero. The supporting leg is interchanged with the swinging leg, so that the right leg becomes the swinging leg and the left leg becomes the supporting leg. The control program of the biped robot controls the movement of the supporting leg and the swinging leg to achieve the in-air phase of the biped robot.
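The two control laws above translate directly into code. In the snippet below, the gains K_pf and K_df and the matrices K_0v, K_pv, and K_dv are arbitrary placeholder scalars; the disclosure only states that they are coefficients to be determined from the single-leg support time and the height of the center of mass.

```python
def support_force_z(h_set, h, v_h, K_pf, K_df, mass, g=9.81):
    """Vertical ground force keeping the center of mass at the set height:
    F_z = K_pf*(h_set - h) + K_df*(-v_h) + M*g."""
    return K_pf * (h_set - h) + K_df * (-v_h) + mass * g

def foothold_coordinate(v, v_d, K_0v, K_pv, K_dv):
    """Foothold coordinate (step length) regulating the advancing speed:
    L_f = K_0v + K_pv*v + K_dv*(v - v_d)."""
    return K_0v + K_pv * v + K_dv * (v - v_d)

# Placeholder numbers for a small biped: 12 kg body, 0.40 m set CoM height.
F_z = support_force_z(h_set=0.40, h=0.38, v_h=-0.15, K_pf=900.0, K_df=60.0, mass=12.0)
L_f = foothold_coordinate(v=0.9, v_d=1.0, K_0v=0.0, K_pv=0.16, K_dv=0.05)
print(f"commanded vertical force: {F_z:.1f} N")
print(f"foothold coordinate (step length): {L_f:.3f} m")
```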
The formula of the swinging trajectory of both legs of the robot is as follows:

$x_{sw} = \begin{cases} x_s, & p \le \Delta p \\ f_{swx}(x_s, x_f, p, T), & p > \Delta p \end{cases}$

$z_{sw} = \begin{cases} \Delta h\,\dfrac{p}{\Delta p} + z_s\left(1 - \dfrac{p}{\Delta p}\right), & p \le \Delta p \\ f_{swz}(\Delta h, z_f, p, T), & p > \Delta p \end{cases}$

and

$z_{su} = z_{sus}\,\dfrac{p}{\Delta p} + z_{suf}\left(1 - \dfrac{p}{\Delta p}\right), \quad p \le \Delta p$,

where x_sw and z_sw denote coordinates of an end point of the swinging leg; z_su denotes an ordinate of an end point of the supporting leg; x_s and z_s denote initial coordinates of the end point of the swinging leg; x_f and z_f denote set coordinates of the end point of the swinging leg; Δh denotes a set lifting height of a leg; p denotes set phase information that is positively correlated with an execution time for the current gait, and Δp denotes a set phase duration of the in-air phase; T denotes a stride cycle; f_swx(x_s, x_f, p, T) and f_swz(Δh, z_f, p, T) denote planning curves of the swinging leg; and z_sus and z_suf respectively denote a starting ordinate and a set ordinate of the end point of the supporting leg. According to the swinging trajectory of both legs of the robot, the robot controls the supporting leg to move downwards at the beginning of one gait cycle, and controls the swinging leg to lift upward. When the upward contraction speed of the swinging leg is greater than the vertical downward component of the velocity of the center of mass, the end of the supporting leg generates a downward speed while the end of the swinging leg generates an upward speed. When the contraction speed of the swinging leg of the robot is fast enough, the swinging leg has been lifted while the supporting leg has not yet fallen to the ground, so that the robot is in a state in which both of its feet are in the air, which is shown inFIG.5. The in-air phase of this method is generated by the rapid switching state of the current supporting leg, thus reducing the output torque requirements of the robot knee joints. This method is also applicable to those robots whose joint performance is constrained by motor capabilities. When the robot is in the in-air phase of both legs, the influence of air resistance can be ignored, the center of mass of the robot maintains the current movement, and the robot continues to maintain its advancing speed in the forward direction. When the supporting leg touches the ground or the phase information p > Δp is satisfied, the robot switches the state machine from the in-air phase of both legs into the supporting phase of a single leg, which is shown inFIG.6. At the same time, the control program controls the swinging leg to move according to the set swinging trajectory25, which is determined by f_swx(x_s, x_f, p, T) and f_swz(Δh, z_f, p, T), and the above method is used to implement the corresponding control strategy for the supporting leg of the biped robot based on the HIP model. This cycle repeats to form stable running gaits of the biped robot. In general, compared with the LIP model or the SLIP model, the control strategy based on the HIP model greatly reduces the requirements for the mechanical structure design and joint performance of the biped robot. The control algorithm can be applied to most biped robots, especially motor-driven biped robots, and has better flexibility and versatility. | 12,753
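The piecewise leg-end trajectories given above can be sketched directly. The planning curves f_swx and f_swz are not specified in the disclosure, so simple cosine-blend interpolations are substituted here purely as placeholders, and all gait parameters in the demo are invented values.

```python
import math

def swing_x(p, dp, T, x_s, x_f):
    """x-coordinate of the swinging-leg end point."""
    if p <= dp:
        return x_s                                  # hold during the in-air phase
    s = (p - dp) / (T - dp)                         # normalized swing progress
    return x_s + (x_f - x_s) * 0.5 * (1 - math.cos(math.pi * s))   # placeholder f_swx

def swing_z(p, dp, T, z_s, z_f, dh):
    """z-coordinate of the swinging-leg end point (lift, then placement)."""
    if p <= dp:
        return dh * p / dp + z_s * (1 - p / dp)     # linear lift toward height dh
    s = (p - dp) / (T - dp)
    return dh + (z_f - dh) * 0.5 * (1 - math.cos(math.pi * s))     # placeholder f_swz

def support_z(p, dp, z_sus, z_suf):
    """z-coordinate of the supporting-leg end point during the in-air phase."""
    return z_sus * p / dp + z_suf * (1 - p / dp)

# Placeholder gait parameters: stride cycle T, in-air phase duration dp, lift height dh.
T, dp, dh = 0.40, 0.08, 0.06
for p in (0.0, 0.04, 0.08, 0.20, 0.40):
    line = (f"p={p:.2f}  x_sw={swing_x(p, dp, T, 0.0, 0.25):+.3f}"
            f"  z_sw={swing_z(p, dp, T, 0.0, 0.0, dh):+.3f}")
    if p <= dp:   # the supporting-leg formula is defined only for p <= dp
        line += f"  z_su={support_z(p, dp, -0.03, 0.0):+.3f}"
    print(line)
```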
11858139 | DETAILED DESCRIPTION Overview The systems and methods herein involve a manipulator system including a robotic manipulator, a controller, a support structure, and one or more sensors. The support structure may be any type of deformable or non-rigid support surface designed to support an object on an upper surface of the support structure. In one example, the support structure may be both deformable and non-even, such that portions of the support structure are positioned higher than other portions of the support structure. For example, a support structure may have an “egg-crate” type geometry with peaks and valleys. The support structure may be made from any well-known deformable or non-rigid material (e.g., foam, beads, liquid). The support structure may be positioned in close proximity to a robotic manipulator. The robotic manipulator may include one or more linked arms that may have multiple degrees of freedom and may include a gripper at a distal end for gripping objects. The arms may be articulating arms. The gripper may include two or more finger extensions, for example, for griping objects (e.g., clamping objects between the two finger extensions). However, any type of well-known gripper may be used. The robotic manipulator may have a reach that extends to the support structure. The robotic manipulator may include or may be in communication with a controller. The controller may also be in communication with one or more sensors. For example, a sensor may be any visual or light sensor (e.g., any type of camera) a stereo sensor, a color (RGB) sensor, a Lidar sensor, and/or any other type of sensor for sensing light, depth, and/or colors. The sensor may generate sensor data regarding the support structure and/or the object and may communicate the sensor data to the controller. The controller may analyze the sensor data and/or any other information known about the support surface and/or object and may determine an optimal approach for griping and moving the object to a new location. The controller may analyze sensor data regarding the support structure and/or may be programmed to know information about the support structure (e.g., compression information), and may use this information to determine one or more movements of the robotic manipulator, including an angle of the manipulator and/or gripper and a force to be used for such movements. The controller may further determine contact locations on the object that the robotic manipulator should contact the object. The manipulator system described herein may provide an optimal setting for griping a surface of an object. Specifically, the non-planar nature of the support structure may provide voids under a surface of an object into which the gripper may extend into to grip the bottom surface of the object. Further, the deformable nature of the support structure may permit the gripper to position itself below the bottom surface of the object. As the controller may know or determine the type of material and/or geometry of the support structure, the controller may predict how the support surface will interact with the robotic manipulator. In this manner, the robotic manipulator system may grip and move objects that traditional robotic manipulator systems would not be able to grip and/or move. Referring now toFIG.1, an exemplary use case of robotic manipulator system100is illustrated in accordance with one or more example embodiments of the disclosure. 
As is shown inFIG.1, robotic manipulator system100may include support structure102, robotic manipulator101, controller104and sensor110. The controller104may optionally communicate with computing device130which may be a remote computing device. Also shown inFIG.1, support structure102may support object112. In this example, object112may be a book, though it is understood that support structure102may support any type of object and/or multiple objects. Support structure102may be any type of deformable or non-rigid support surface designed to support an object on an upper surface of the support structure102. Support structure102may be deformable and/or non-even, such that portions of the support structure may be positioned higher than other portions of the support structure. As shown inFIG.1, support structure102may have an “egg-crate” type geometry with peaks and valleys. The support structure may be made from any well-known deformable or non-rigid material (e.g., foam, beads, liquid). The support structure may return to its original shape after being deformed. Robotic manipulator101may include one or more arms (e.g., arm105) and may include a gripper108at the distal end. The arms (e.g., arm105) may be articulating arms. The gripper108may be designed to open and close to grip an object such that the robotic manipulator101may grip an object, move an object, and release the object at a new location. As shown in close-up view107, gripper may include one or more finger extensions114that may make contact with the object. However, any type of well-known gripping or grasper may be used. The robotic manipulator101may have a reach that extends to the support structure. The robotic manipulator101may include or may be in communication with controller104which may be separate from or incorporated into robotic manipulator101and/or may be in wired communication or wireless communication with robotic manipulator105. The controller104may further communicate with sensor110and optionally computing device130via any wired or wireless communication technology. Controller104may be any computing device with a processor. For example, controller104may be a laptop, desktop computer, server, or even a smart phone, tablet, or the like. Controller104may run one or more local applications to facilitate operation of the robotic manipulator and/or sensor, and/or may perform such operations together with one or more other computing devices or servers that may otherwise process instructions and/or perform operations or tasks described herein. Sensor110may be any visual or light sensor (e.g., any type of camera) a stereo sensor, a color (RGB) sensor, a greyscale sensor, a Lidar sensor, proximity sensor and/or any other type of sensor for sensing light, depth, proximity and/or colors. The sensor110may generate sensor data regarding the support structure and/or the object and may communicate the sensor data (e.g., RGB and/or stereo data) to the controller104. It is understood that more than one sensor may be included in robotic manipulator system100. As shown in close-up view107, finger extension114of gripper108may be positioned under object112. As support structure102is deformable, robotic manipulator101and specifically finger extension114may make contact with support structure102and specifically support portion116. Support portion116may be a domed shaped component of support structure102and may compress and/or partially flatten as gripper108and finger extension114advance toward object112. 
The deformation of support portion116may permit gripper108to be positioned under object112. Further, the voids present in support structure102permit gripper108and finger extension114to be positioned below object112with minimal contact with object112. To initiate the actions of employing the robotic manipulation system shown inFIG.1, an example process flow120is presented and may be performed, for example, by one or more modules at controller104. For example, the controller may include at least one memory that stores computer-executable instructions and at least one processor configured to access the at least one memory and execute the computer-executable instructions to perform various actions or operations, such as one or more of the operations in the process flow120ofFIG.1. At block122, a location of the object112with respect to the support structure102and/or the robotic manipulator101may be determined. For example, image and/or light data generated by sensor110may be processed to make this determination. At block124, a surface type and/or properties corresponding to the support structure may be determined. For example, controller104may know about the surface geometry and/or compression properties of the support structure. Alternatively, image and/or light data corresponding to the structure may be analyzed to determine that the support structure matches a support structure known to controller104with known properties and/or surface geometries. In one example, the controller may even maintain or have access to a three-dimensional model of the support structure102. At block126, an angle of the gripper108and/or an advancement or movement force for the gripper108may be determined. This determination may be based on the location of the object and information known about the support structure102(e.g., compression properties and/or geometry of the surface). The angle may be the angle of the gripper108with respect to a plane of support structure102. The force may be the force applied to arm105to move it in any direction. At block128, the controller may cause the robotic manipulator to retrieve the item using the angle and the force determined at block126. Illustrative Process and Use Cases Referring now toFIGS.2A-2D, various cross-sections of exemplary support structures are depicted in accordance with one or more example embodiments of the disclosure. It is understood that support structures in the present robotic manipulation system may take any shape and may even be planar. Further, it is understood that the support structure may be uniform with repeating patterns or may be non-uniform with varying patterns. It is understood that the height of the peaks and the depth of the valleys may be varied. As shown inFIG.2A, support structure202may include a plurality of triangular shaped peaks that form cones or pyramids in three-dimensional space. The triangular shaped peaks may offer the smallest contact points with the objects and may offer large voids in the area closest to the objects, as compared to other geometries. As shown inFIG.2B, support structure204may include a plurality of circular peaks that form domes in three-dimensional space. For example, support structure204may appear similar to support structure102illustrated inFIG.1. The domed shaped peaks may offer relatively small contact points with the object and may maximize the void closest to the object. As shown inFIG.2C, support structure206may include a plurality of square or rectangular peaks that form cubes in three-dimensional space.
For example, each cube peak may be adjacent on all sides to an equally sized cubic void. The cubed shaped peaks may offer a significant amount of support for object and may be used for heavy objects, for example. As shown inFIG.2D, support structure208may even be generally flat, without any peaks and without any valleys. In one example, a container may be filled with beads210or any other structure that may be moved or displaced. For example, beads210may be rigid but may be displaced to permit finger extensions of a gripper of a robotic manipulator to become submerged below the surface of the beads. Referring now toFIGS.3A-3B, multi-layered support structures with varying compression properties are depicted, in accordance with one or more example embodiments of the disclosure. As shown inFIG.3A, support structure302includes domed peaks and domed troughs interspaced between the domed peaks. The shape and size of the peaks and troughs may be uniform throughout or may vary. As shown inFIG.3A, support structure302may be formed from several layers of materials. In one example, support structure302may include three layers of compressible foam. Each layer of compressible foam may have a different compression value (e.g., a value indicative of a degree or amount of compression for a given force). For example, layer104may be the most compressible, layer308may be the least compressible (e.g., most firm), and layer306may have a compression value between layer304and layer308. It is understood that there may be greater or fewer layers than those shown inFIG.3A, and each layer may have a different shape, size, and/or compression value. Referring now toFIG.3B, support structure310which may be an even and/or planar surface that may be formed from several layers of materials. In one example, support structure310may include three layers of compressible foam. Each layer of compressible foam may also be even and/or planar and may have a different compression value than the other layers. For example, layer312may be the most compressible, layer316may be the least compressible (e.g., most firm), and layer314may have a compression value between layer312and layer316. It is understood that there may be greater or fewer layers than those shown inFIG.3A, and each layer may have a different shape, size, and/or compression value than each other layer. Referring now toFIGS.4A-4B, schematic illustrations of an exemplary support structure and representation thereof are depicted in accordance with one or more example embodiments of the disclosure. Referring now toFIG.4A, an exemplary point cloud representation402of a support structure is illustrated. As explained above with respect toFIG.1, a robotic manipulator system may include one or more sensors focused on and/or oriented towards the support structure. The sensor or sensors may be any visual or light sensor (e.g., any type of camera) a stereo sensor, a color (RGB) sensor, a greyscale sensor, a Lidar sensor, and/or any other type of sensor for sensing light, depth, and/or colors. The sensor may generate sensor data regarding the support surface and/or an object positioned on the support structure and may communicate the sensor data to the controller. The sensor data generated by the one or more sensors may be processed by a point cloud module (e.g., on the controller and/or a computing device in communication with the controller) to determine a three-dimensional point cloud corresponding to the sensor data. 
For example, the sensor data may be RGB data, stereo data, greyscale data and/or depth data. The point cloud module may process the three-dimensional point cloud (e.g., using segmentation algorithms) to determine perspective views of the depth point cloud (e.g., top down views) and may further segment the point cloud resulting in depth segmentation. In this manner, the controller and/or any other computing device in communication with the controller may process the sensor data to determine a three-dimensional point cloud and may process the point cloud using a depth segmentation module to segment the depth point cloud. The controller and/or computing device may process the sensor data using one or more algorithms and/or models that are designed and/or trained to determine a three-dimensional representation of the sensor data such as a point cloud. For example, the three-dimensional representation of the support structure may be point cloud representation402. As shown inFIG.4A, the point cloud representation402may be made up of points404arranged in three-dimensional space to form the representation. The point cloud representation may be used to determine geometry, size and/or shape information regarding the support structure. Further, the point cloud representation may be used to match the support structure to known support structures. For example, the controller and/or computing device may maintain a library of known support structures that may be associated with certain geometries and/or other properties. The controller and/or library may also maintain a library of three-dimensional models of known support structures. Referring now toFIG.4B, the controller and/or computing device in communication with the controller may analyze the sensor data and/or point cloud data to determine the geometry of the support structure and any voids defined by the support structure and/or object. For example, support structure405may include a domed peaks and valleys that may be uniformly distributed. A side view of the support structure and/or point cloud representation of the support structure may be analyzed to determine the size and shape of the peaks and valleys and may further be analyzed to determine one or more voids defined by an object positioned on the support structure (e.g., object410) and the support structure405. For example, controller and/or computing device in communication with the controller may determine points along the surface to define the void. In one example, peaks412and422, center points414and420and bottom point416of the support structure peaks and valleys may be determined based on the sensor data and/or point cloud data. Further, a lower boundary point424of object410may be identified to fully define void408. An upper boundary point426of object410may also be identified to determine an upper boundary of the object410. The points defining the void may be used to determine a center point425of the void408. If the support structure is uniform, the geometry, including the center point, of other voids may be estimated. It is understood that any other well-known technique or approach may be used to determine and/or define the void of the support structure. In one example, the robotic manipulator system may align the robotic manipulator with the center point425. Referring now toFIGS.5, a schematic illustration of an exemplary object and representation of the object is illustrated in accordance with one or more example embodiments of the disclosure. 
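The point cloud processing and the void-geometry analysis around FIG. 4B can be approximated with standard array operations. The sketch below back-projects a synthetic depth image into a 3D point cloud with a pinhole camera model, separates support-structure points from object points with a simple height threshold, and estimates a void-center height under the object edge. The camera intrinsics, the threshold, and the synthetic egg-crate scene are all invented for illustration and do not reflect the disclosed implementation.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an N x 3 point cloud."""
    rows, cols = depth.shape
    u, v = np.meshgrid(np.arange(cols), np.arange(rows))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Synthetic top-down depth image: an egg-crate support surface with a flat
# object resting on the peaks in the center of the image (all values invented).
h, w, cam_height = 120, 120, 0.60
u, v = np.meshgrid(np.arange(w), np.arange(h))
surface = 0.02 * (np.sin(u / 6.0) * np.sin(v / 6.0) + 1.0)   # support: 0 .. 0.04 m
surface[40:80, 40:80] = 0.05                                  # object top at 0.05 m
depth = cam_height - surface                                  # camera looks straight down

cloud = depth_to_point_cloud(depth, fx=200.0, fy=200.0, cx=w / 2, cy=h / 2)
height = cam_height - cloud[:, 2]            # height of each point above the base

# Assumed height-threshold segmentation: points above 0.045 m belong to the object.
object_pts = cloud[height > 0.045]
support_pts = cloud[height <= 0.045]

# Rough void-center estimate near the object edge (FIG. 4B idea): midway between
# the valley floor and the peak tops on which the object underside rests.
edge_x = object_pts[:, 0].min()
near_edge = support_pts[np.abs(support_pts[:, 0] - edge_x) < 0.02]
near_heights = cam_height - near_edge[:, 2]
void_center_height = 0.5 * (near_heights.min() + near_heights.max())
print(f"{len(object_pts)} object points, {len(support_pts)} support points")
print(f"approximate void center height above the base: {void_center_height:.3f} m")
```

A production system would replace the threshold with a proper segmentation of the depth point cloud, but the geometric idea, locating the gap between valley floor and object underside, is the same.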
Object502may be positioned on a support structure (e.g., support structure102ofFIG.1) and sensor data may be generated with respect to object502, as described above with respect toFIG.4A. As explained above, the sensor data may be processed (e.g. by a point cloud module running on the controller and/or computing device in communication with the controller) to generate point cloud504. Point cloud representation504may be generated at the same time as the point cloud generated for the support structure or may be generated at a different time. Further, the sensor data used to generate the point cloud representation of the support structure may be the same or different than the sensor data used to generate point cloud representation504of object502. Similar to the point cloud representation of the support structure, point cloud representation504of object502may be used to determine geometry, size and/or shape information regarding the object. Referring now toFIG.6, a technique for generating box representation of an object is depicted, in accordance with one or more example embodiments of the present disclosure. The point cloud representation of the object may be used to estimate the size and shape of the object which may be useful for determining where and how to grasp the object using the robotic manipulator. In one example, it may be useful to simplify the shape of the object. For example, the point cloud representation may be used to determine a bounding box of the object. In one example, the minimal volume bounding box (MVBB) technique may be employed. In the bounding box approach, a point cloud representation602of the object, which may be the same a cloud representation504ofFIG.5, may be processed to determine one or more boxes with minimal volume that are used to estimate the surface, shape and size of the object and/or point cloud representation of the object. As shown inFIG.6, box representation606may be a box representation of the point cloud representation602of the object. The box representation606may simplify the shape of the point cloud representation602of the object. For example, the point cloud representation602may reflect an indentation608of the object. However, the box representation may simplify this shape with a standard rectangle. A reference frame of the object and point cloud representation of the object based on the shape and/or geometry of the object (e.g., reference frame604), may correspond to reference frame610the box representation606. Referring now toFIG.7, an approach for determining a plurality of contact points on the object and/or a representation of the object is illustrated, in accordance with one or more example embodiments of the present disclosure. As shown inFIG.7, a bounding box702may be determined from an object and/or point cloud representation. To determine the best contact points for grasping the object, the controller and/or a computing device in communication with the controller may position a representation of the gripping device, using known dimensions, to determine the optimal location on the object to grasp the object. In one example, gripping representations704, gripping representations708, and gripping representations706may be considered to determine possible gripping locations on a left side of the object, top side of the object, and front side of the object. The gripping location that fits within the gripping constraints and/or minimizes contact with a support structure may be determined to be the optimal gripping position. 
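The bounding-box step and the candidate-evaluation step of FIG. 7 can be roughed out as follows. A PCA-aligned box is used here as a cheap stand-in for a true minimal-volume bounding box, and the scoring rule (prefer faces narrower than the gripper opening, penalize approach angles that would press the fingers into the support peaks, echoing FIGS. 8A-8B) is an invented heuristic, not the patented selection logic; the object dimensions, gripper opening, and clearance values are placeholders.

```python
import numpy as np

def pca_bounding_box(points):
    """Oriented bounding box from PCA axes (an approximation of an MVBB)."""
    center = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - center, full_matrices=False)
    local = (points - center) @ vt.T          # express points in the PCA frame
    extents = local.max(axis=0) - local.min(axis=0)
    return center, vt, extents                # rows of vt are the box axes

def score_grasp_candidates(extents, gripper_opening, peak_clearance, approach_angles):
    """Rank (box-axis, approach-angle) candidates; smaller score is better."""
    scores = []
    for axis in range(3):                     # grasp across each box dimension
        width = extents[axis]
        if width > gripper_opening:
            continue                          # face too wide for the gripper
        for angle in approach_angles:         # angle from the support-plane normal (deg)
            # Assumed interference model: more vertical approaches push the fingers
            # deeper into the support peaks than the available clearance allows.
            finger_depth = 0.02 * np.cos(np.radians(angle))
            interference = max(0.0, finger_depth - peak_clearance)
            scores.append((width + 5.0 * interference, axis, angle))
    return sorted(scores)

rng = np.random.default_rng(0)
# Synthetic box-like object roughly 0.20 x 0.12 x 0.03 m with slight sensor noise.
pts = rng.uniform(-0.5, 0.5, size=(2000, 3)) * [0.20, 0.12, 0.03]
pts += rng.normal(scale=0.001, size=pts.shape)

center, frame, extents = pca_bounding_box(pts)
candidates = score_grasp_candidates(extents, gripper_opening=0.08,
                                    peak_clearance=0.015,
                                    approach_angles=(0.0, 30.0, 45.0, 60.0))
best = candidates[0]
print("box extents [m]:", np.round(extents, 3))
print(f"best grasp: across axis {best[1]}, approach {best[2]} deg from the normal")
```

With these made-up numbers the thin dimension of the box is the only graspable face, and a tilted approach is preferred because it reduces interference with the support peaks, which mirrors the qualitative argument of FIGS. 8A-8B.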
It is understood that the geometry of the support structure may be known and such information may inform the optimal gripping positions. FIGS.8A-8Billustrate an exemplary robotic manipulator interfacing with a support structure at various angles, in accordance with one or more example embodiments of the present disclosure. As shown inFIG.8A, when the robotic manipulator802approaches object805with an angle806that is normal to a plane of the support structure804, a gripper at the distal end of the robotic manipulator802may interfere significantly with the peaks of the support structure. The degree of compression resulting from the interference may cause the object805to move away from robotic manipulator802and/or may result in resistive forces from the support structure acting on the robotic manipulator that are relatively large and may be difficult to predict. Further such force may damage the support structure. Referring now toFIG.8B, robotic manipulator802may approach support structure804at angle810that is less than 90 degrees from the plane that is normal to the longitudinal plane of the support structure. Such an angle of the robotic manipulator802may permit the gripper of the robotic manipulator802to enter a void of the support structure and otherwise avoid significant interface with the support structure804. As shown inFIG.8B, the interference and resulting compression808of the support structure804when the robotic manipulator802is oriented at such an angle is significantly less than the compression807inFIG.8A, when the robotic manipulator802approaches at angle806that is normal to a plane of the support structure804. FIG.9depicts exemplary process flow900for determining how and where to grasp the object using the robotic manipulator, in accordance with one or more example embodiments of the disclosure. Some or all of the blocks of the process flows in this disclosure may be performed in a distributed manner across any number of devices (e.g., computing devices and/or servers). Some or all of the operations of the process flow may be optional and may be performed in a different order. To employ process flow900, at block902computer-executable instructions stored on a memory of a device, such as a controller and/or a computing device in communication with the controller, may be executed to determine sensor data of the object and/or the support structure. As explained above, sensor data may be RGB data, stereo data, greyscale data, depth data, and the like. At block904, computer-executable instructions stored on a memory of a device, such as a controller and/or a computing device in communication with the controller, may be executed to generate a point cloud of the object and optionally the support structure (e.g., using the point cloud module). At block906, computer-executable instructions stored on a memory of a device, such as a controller and/or a computing device in communication with the controller, may be executed to determine the geometry of the support structure. For example, a side view of the support structure may be analyzed. Alternatively, or additionally, the support structure may be matched with a known model (e.g., a known 3-D model). At block908, computer-executable instructions stored on a memory of a device, such as a controller and/or a computing device in communication with the controller, may be executed to determine support structure properties (e.g., compression value). 
This may either be known, input, or determined based on matching the support structure with a known support structure. In one example, the robotic manipulator may include one or more motion sensors (e.g., accelerometers) and motion data generated by the interaction between the robotic manipulator and the support structure may be used to determine and/or inform the structure properties (e.g., a compression value). At block910, computer-executable instructions stored on a memory of a device, such as a controller and/or a computing device in communication with the controller, may be executed to estimate the shape of the object. For example, the minimal volume bounding box (MVBB) technique may be used. At block912, computer-executable instructions stored on a memory of a device, such as a controller and/or a computing device in communication with the controller, may be executed to determine a location of the object with respect to the robotic manipulator and/or the support surface. The point cloud or other sensor data may be analyzed to make this determination. At block914, computer-executable instructions stored on a memory of a device, such as a controller and/or a computing device in communication with the controller, may be executed to determine contact points on the object based on shape of object, location of object, and/or support surface geometry, shape and/or properties (e.g., compression value). At block916, computer-executable instructions stored on a memory of a device, such as a controller and/or a computing device in communication with the controller, may be executed to determine an optimum contact point for grasping the object (e.g., the contact point that minimizes interference with the support structure and maximizes grip). At block918, computer-executable instructions stored on a memory of a device, such as a controller and/or a computing device in communication with the controller, may be executed to determine manipulator properties (e.g., movement force, gripper width, gripper angle) based on the optimum contact point and support structure properties. At block920, computer-executable instructions stored on a memory of a device, such as a controller and/or a computing device in communication with the controller, may be executed to cause the robotic manipulator to grasp the object based on the properties determined block918. Illustrative Device Architecture FIG.10is a schematic block diagram of a controller1000in accordance with one or more example embodiments of the disclosure. The controller1000may be a computing device in communication with a robotic manipulator, one or more sensor, and/or a computing device (e.g., computing device1001). The computing device may be in communication with other computing devices (e.g., computing device1001) which may optionally perform one or more of the tasks described herein together with the computing device. Controller1000may correspond to controller104ofFIG.1and/or any other controller ofFIGS.1-9. The controller1000may be configured to communicate via one or more networks with one or more computing devices, servers, electronic devices, user devices, or the like. 
Example network(s) may include, but are not limited to, any one or more different types of communications networks such as, for example, cable networks, public networks (e.g., the Internet), private networks (e.g., frame-relay networks), wireless networks, cellular networks, telephone networks (e.g., a public switched telephone network), or any other suitable private or public packet-switched or circuit-switched networks. Further, such network(s) may have any suitable communication range associated therewith and may include, for example, global networks (e.g., the Internet), metropolitan area networks (MANs), wide area networks (WANs), local area networks (LANs), or personal area networks (PANs). In addition, such network(s) may include communication links and associated networking devices (e.g., link-layer switches, routers, etc.) for transmitting network traffic over any suitable type of medium including, but not limited to, coaxial cable, twisted-pair wire (e.g., twisted-pair copper wire), optical fiber, a hybrid fiber-coaxial (HFC) medium, a microwave medium, a radio frequency communication medium, a satellite communication medium, or any combination thereof. In an illustrative configuration, the controller1000may include one or more processors (processor(s))1002, one or more memory devices1004(generically referred to herein as memory1004), one or more of the optional input/output (I/O) interface(s)1006, one or more network interface(s)1008, one or more transceivers1012, and one or more antenna(s)1034. The controller1000may further include one or more buses1018that functionally couple various components of the controller1000. The controller1000may further include one or more antenna(e)1034that may include, without limitation, a cellular antenna for transmitting or receiving signals to/from a cellular network infrastructure, an antenna for transmitting or receiving Wi-Fi signals to/from an access point (AP), a Global Navigation Satellite System (GNSS) antenna for receiving GNSS signals from a GNSS satellite, a Bluetooth antenna for transmitting or receiving Bluetooth signals including BLE signals, a Near Field Communication (NFC) antenna for transmitting or receiving NFC signals, a 1000 MHz antenna, and so forth. These various components will be described in more detail hereinafter. The bus(es)1018may include at least one of a system bus, a memory bus, an address bus, or a message bus, and may permit exchange of information (e.g., data (including computer-executable code), signaling, etc.) between various components of the controller1000. The bus(es)1018may include, without limitation, a memory bus or a memory controller, a peripheral bus, an accelerated graphics port, and so forth. The bus(es)1018may be associated with any suitable bus architecture including, without limitation, an Industry Standard Architecture (ISA), a Micro Channel Architecture (MCA), an Enhanced ISA (EISA), a Video Electronics Standards Association (VESA) architecture, an Accelerated Graphics Port (AGP) architecture, a Peripheral Component Interconnects (PCI) architecture, a PCI-Express architecture, a Personal Computer Memory Card International Association (PCMCIA) architecture, a Universal Serial Bus (USB) architecture, and so forth. 
The memory1004of the controller1000may include volatile memory (memory that maintains its state when supplied with power) such as random access memory (RAM) and/or non-volatile memory (memory that maintains its state even when not supplied with power) such as read-only memory (ROM), flash memory, ferroelectric RAM (FRAM), and so forth. Persistent data storage, as that term is used herein, may include non-volatile memory. In certain example embodiments, volatile memory may enable faster read/write access than non-volatile memory. However, in certain other example embodiments, certain types of non-volatile memory (e.g., FRAM) may enable faster read/write access than certain types of volatile memory. In various implementations, the memory1004may include multiple different types of memory such as various types of static random access memory (SRAM), various types of dynamic random access memory (DRAM), various types of unalterable ROM, and/or writeable variants of ROM such as electrically erasable programmable read-only memory (EEPROM), flash memory, and so forth. The memory1004may include main memory as well as various forms of cache memory such as instruction cache(s), data cache(s), translation lookaside buffer(s) (TLBs), and so forth. Further, cache memory such as a data cache may be a multi-level cache organized as a hierarchy of one or more cache levels (L1, L2, etc.). The data storage1020may include removable storage and/or non-removable storage including, but not limited to, magnetic storage, optical disk storage, and/or tape storage. The data storage1020may provide non-volatile storage of computer-executable instructions and other data. The memory1004and the data storage1020, removable and/or non-removable, are examples of computer-readable storage media (CRSM) as that term is used herein. The data storage1020may store computer-executable code, instructions, or the like that may be loadable into the memory1004and executable by the processor(s)1002to cause the processor(s)1002to perform or initiate various operations. The data storage1020may additionally store data that may be copied to memory1004for use by the processor(s)1002during the execution of the computer-executable instructions. Moreover, output data generated as a result of execution of the computer-executable instructions by the processor(s)1002may be stored initially in memory1004, and may ultimately be copied to data storage1020for non-volatile storage. More specifically, the data storage1020may store one or more operating systems (O/S)1022; one or more optional database management systems (DBMS)1024; and one or more program module(s), applications, engines, computer-executable code, scripts, or the like such as, for example, one or more implementation module(s)1026, one or more point cloud module(s)1027, one or more communication module(s)1028, one or more support structure module(s)1029, and/or one or more manipulator module(s)1030. Some or all of these module(s) may be sub-module(s). Any of the components depicted as being stored in data storage1020may include any combination of software, firmware, and/or hardware. The software and/or firmware may include computer-executable code, instructions, or the like that may be loaded into the memory1004for execution by one or more of the processor(s)1002. Any of the components depicted as being stored in data storage1020may support functionality described in reference to correspondingly named components earlier in this disclosure. 
The data storage1020may further store various types of data utilized by components of the controller1000. Any data stored in the data storage1020may be loaded into the memory1004for use by the processor(s)1002in executing computer-executable code. In addition, any data depicted as being stored in the data storage1020may potentially be stored in one or more datastore(s) and may be accessed via the DBMS924and loaded in the memory1004for use by the processor(s)1002in executing computer-executable code. The datastore(s) may include, but are not limited to, databases (e.g., relational, object-oriented, etc.), file systems, flat files, distributed datastores in which data is stored on more than one node of a computer network, peer-to-peer network datastores, or the like. The processor(s)1002may be configured to access the memory1004and execute computer-executable instructions loaded therein. For example, the processor(s)1002may be configured to execute computer-executable instructions of the various program module(s), applications, engines, or the like of the controller1000to cause or facilitate various operations to be performed in accordance with one or more embodiments of the disclosure. The processor(s)1002may include any suitable processing unit capable of accepting data as input, processing the input data in accordance with stored computer-executable instructions, and generating output data. The processor(s)1002may include any type of suitable processing unit including, but not limited to, a central processing unit, a microprocessor, a Reduced Instruction Set Computer (RISC) microprocessor, a Complex Instruction Set Computer (CISC) microprocessor, a microcontroller, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a System-on-a-Chip (SoC), a digital signal processor (DSP), and so forth. Further, the processor(s)1002may have any suitable microarchitecture design that includes any number of constituent components such as, for example, registers, multiplexers, arithmetic logic units, cache controllers for controlling read/write operations to cache memory, branch predictors, or the like. The microarchitecture design of the processor(s)1002may be capable of supporting any of a variety of instruction sets. Referring now to functionality supported by the various program module(s) depicted inFIG.11, the implementation module(s)1026may include computer-executable instructions, code, or the like that are responsive to execution by one or more of the processor(s)1002may perform functions including, but not limited to, overseeing coordination and interaction between one or more modules and computer executable instructions in data storage1020and/or determining user selected actions and tasks. Implementation module1026may further coordinate with communication module1028to send messages to and receive messages from a remote computing device. The point cloud module(s)1027may include computer-executable instructions, code, or the like that are responsive to execution by one or more of the processor(s)1002may perform functions including, but not limited to, generating point cloud representations and performing segmentation. Point cloud representations may be generated from the sensor data. Point cloud module may determine point cloud representations using any well-known techniques for generating point clouds. 
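The point cloud module(s)1027are described only at the level of "any well-known techniques." One such well-known technique, shown here purely as a hedged illustration rather than as the module's actual implementation, back-projects a depth image through a pinhole camera model; the intrinsic parameters below are made-up values.

```python
import numpy as np

def depth_to_point_cloud(depth: np.ndarray, fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """Back-project a depth image (meters) into an N x 3 point cloud in the
    camera frame using a pinhole model. Zero-depth pixels are discarded."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

# Example with synthetic data and assumed intrinsics.
depth = np.full((480, 640), 0.8)          # a flat surface 0.8 m from the camera
cloud = depth_to_point_cloud(depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(cloud.shape)                        # (307200, 3)
```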
The communication module(s)1028may include computer-executable instructions, code, or the like that are responsive to execution by one or more of the processor(s)902may perform functions including, but not limited to, communicating with one or more computing devices, for example, via wired or wireless communication, communicating with computing devices, communicating with one or more servers (e.g., remote servers), communicating with remote datastores and/or databases, sending or receiving notifications or commands/directives, communicating with cache memory data, and the like. The support structure module(s)1029may include computer-executable instructions, code, or the like that are responsive to execution by one or more of the processor(s)902may perform functions including, but not limited to, maintaining a database or library of known support structures, known support structure properties (e.g., compression values, geometries, patterns and/or internal composition), and/or may maintain various three-dimensional models of known support structures. The manipulator module(s)1030may include computer-executable instructions, code, or the like that are responsive to execution by one or more of the processor(s)1002may perform functions including, but not limited to, cause a robotic manipulator to move in various directions, at various forces, and/or at various angles. Manipulator module1030may further control the gripper of a robotic manipulator to open a certain distance, to close with a certain force, and to otherwise move as described herein. Referring now to other illustrative components depicted as being stored in the data storage1020, the O/S1022may be loaded from the data storage1020into the memory1004and may provide an interface between other application software executing on the controller1000and hardware resources of the controller1000. More specifically, the O/S1022may include a set of computer-executable instructions for managing hardware resources of the controller1000and for providing common services to other application programs (e.g., managing memory allocation among various application programs). In certain example embodiments, the O/S1022may control execution of the other program module(s) for content rendering. The O/S1022may include any operating system now known or which may be developed in the future including, but not limited to, any server operating system, any mainframe operating system, or any other proprietary or non-proprietary operating system. The optional DBMS1024may be loaded into the memory1004and may support functionality for accessing, retrieving, storing, and/or manipulating data stored in the memory1004and/or data stored in the data storage1020. The DBMS1024may use any of a variety of database models (e.g., relational model, object model, etc.) and may support any of a variety of query languages. The DBMS1024may access data represented in one or more data schemas and stored in any suitable data repository including, but not limited to, databases (e.g., relational, object-oriented, etc.), file systems, flat files, distributed datastores in which data is stored on more than one node of a computer network, peer-to-peer network datastores, or the like. Referring now to other illustrative components of the controller1000, the input/output (I/O) interface(s)1006may facilitate the receipt of input information by the controller1000from one or more I/O devices as well as the output of information from the controller1000to the one or more I/O devices. 
The I/O devices may include any of a variety of components such as various sensors for navigation as well as the light emitting assembly described herein. The I/O devices may also optionally include a display or display screen having a touch surface or touchscreen; an audio output device for producing sound, such as a speaker; an audio capture device, such as a microphone; an image and/or video capture device, such as a camera; and so forth. Any of these components may be integrated into controller1000or may be separate. The optional I/O interface(s)906may also include an interface for an external peripheral device connection such as universal serial bus (USB), FireWire, Thunderbolt, Ethernet port or other connection protocol that may connect to one or more networks. The optional I/O interface(s)906may also include a connection to one or more of the antenna(e)934to connect to one or more networks via a wireless local area network (WLAN) (such as Wi-Fi®) radio, Bluetooth, ZigBee, and/or a wireless network radio, such as a radio capable of communication with a wireless communication network such as a Long Term Evolution (LTE) network, WiMAX network, 3G network, ZigBee network, etc. The controller1000may further include one or more network interface(s)1008via which the controller1000may communicate with any of a variety of other systems, platforms, networks, devices, and so forth. The network interface(s)1008may enable communication, for example, with one or more servers, computing devices, one or more wireless routers, one or more host servers, one or more web servers, and the like via one or more of networks. The transceiver(s)1012may have the same or substantially the same features, operation, and/or functionality as described above with respect to transceiver(s)912. It should be appreciated that the program module(s), applications, computer-executable instructions, code, or the like depicted inFIG.10as being stored in the data storage1020, are merely illustrative and not exhaustive and that processing described as being supported by any particular module may alternatively be distributed across multiple module(s) or performed by a different module. In addition, various program module(s), script(s), plug-in(s), Application Programming Interface(s) (API(s)), or any other suitable computer-executable code hosted locally on the controller1000and/or hosted on other computing device(s) accessible via one or more networks, may be provided to support functionality provided by the program module(s), applications, or computer-executable code depicted inFIG.10and/or additional or alternate functionality. Further, functionality may be modularized differently such that processing described as being supported collectively by the collection of program module(s) depicted inFIG.10may be performed by a fewer or greater number of module(s), or functionality described as being supported by any particular module may be supported, at least in part, by another module. In addition, program module(s) that support the functionality described herein may form part of one or more applications executable across any number of systems or devices in accordance with any suitable computing model such as, for example, a client-server model, a peer-to-peer model, and so forth. In addition, any of the functionality described as being supported by any of the program module(s) depicted inFIG.10may be implemented, at least partially, in hardware and/or firmware across any number of devices. 
It should further be appreciated that the controller1000may include alternate and/or additional hardware, software, or firmware components beyond those described or depicted without departing from the scope of the disclosure. More particularly, it should be appreciated that software, firmware, or hardware components depicted as forming part of the controller1000are merely illustrative and that some components may not be present or additional components may be provided in various embodiments. While various illustrative program module(s) have been depicted and described as software module(s) stored in data storage1020and/or data storage1020, it should be appreciated that functionality described as being supported by the program module(s) may be enabled by any combination of hardware, software, and/or firmware. It should further be appreciated that each of the above-mentioned module(s) may, in various embodiments, represent a logical partitioning of supported functionality. This logical partitioning is depicted for ease of explanation of the functionality and may not be representative of the structure of software, hardware, and/or firmware for implementing the functionality. Accordingly, it should be appreciated that functionality described as being provided by a particular module may, in various embodiments, be provided at least in part by one or more other module(s). Further, one or more depicted module(s) may not be present in certain embodiments, while in other embodiments, additional module(s) not depicted may be present and may support at least a portion of the described functionality and/or additional functionality. Moreover, while certain module(s) may be depicted and described as sub-module(s) of another module, in certain embodiments, such module(s) may be provided as independent module(s) or as sub-module(s) of other module(s). Program module(s), applications, or the like disclosed herein may include one or more software components including, for example, software objects, methods, data structures, or the like. Each such software component may include computer-executable instructions that, responsive to execution, cause at least a portion of the functionality described herein (e.g., one or more operations of the illustrative methods described herein) to be performed. A software component may be coded in any of a variety of programming languages. An illustrative programming language may be a lower-level programming language such as an assembly language associated with a particular hardware architecture and/or operating system platform. A software component comprising assembly language instructions may require conversion into executable machine code by an assembler prior to execution by the hardware architecture and/or platform. Another example programming language may be a higher-level programming language that may be portable across multiple architectures. A software component comprising higher-level programming language instructions may require conversion to an intermediate representation by an interpreter or a compiler prior to execution. Other examples of programming languages include, but are not limited to, a macro language, a shell or command language, a job control language, a script language, a database query or search language, or a report writing language. 
In one or more example embodiments, a software component comprising instructions in one of the foregoing examples of programming languages may be executed directly by an operating system or other software component without having to be first transformed into another form. A software component may be stored as a file or other data storage construct. Software components of a similar type or functionally related may be stored together such as, for example, in a particular directory, folder, or library. Software components may be static (e.g., pre-established or fixed) or dynamic (e.g., created or modified at the time of execution). Software components may invoke or be invoked by other software components through any of a wide variety of mechanisms. Invoked or invoking software components may comprise other custom-developed application software, operating system functionality (e.g., device drivers, data storage (e.g., file management) routines, other common routines and services, etc.), or third party software components (e.g., middleware, encryption, or other security software, database management software, file transfer or other network communication software, mathematical or statistical software, image processing software, and format translation software). Software components associated with a particular solution or system may reside and be executed on a single platform or may be distributed across multiple platforms. The multiple platforms may be associated with more than one hardware vendor, underlying chip technology, or operating system. Furthermore, software components associated with a particular solution or system may be initially written in one or more programming languages, but may invoke software components written in another programming language. Computer-executable program instructions may be loaded onto a special-purpose computer or other particular machine, a processor, or other programmable data processing apparatus to produce a particular machine, such that execution of the instructions on the computer, processor, or other programmable data processing apparatus causes one or more functions or operations specified in the flow diagrams to be performed. These computer program instructions may also be stored in a computer-readable storage medium (CRSM) that upon execution may direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable storage medium produce an article of manufacture including instruction means that implement one or more functions or operations specified in the flow diagrams. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational elements or steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process. Additional types of CRSM that may be present in any of the devices described herein may include, but are not limited to, programmable random access memory (PRAM), SRAM, DRAM, RAM, ROM, electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the information and which can be accessed. 
Combinations of any of the above are also included within the scope of CRSM. Alternatively, computer-readable communication media (CRCM) may include computer-readable instructions, program module(s), or other data transmitted within a data signal, such as a carrier wave, or other transmission. However, as used herein, CRSM does not include CRCM. Although embodiments have been described in language specific to structural features and/or methodological acts, it is to be understood that the disclosure is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as illustrative forms of implementing the embodiments. Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments could include, while other embodiments do not include, certain features, elements, and/or steps. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements, and/or steps are included or are to be performed in any particular embodiment.
11858140 | DESCRIPTION OF EMBODIMENTS Next, an embodiment of the present invention will be described with reference to the drawings. First, the robot system1of the first embodiment will be described with reference to FIG.1.FIG.1is a block diagram showing the configuration of the robot system1. The robot system1is a system for causing the robot10to perform work. There are various kinds of work to be performed by the robot10, and examples thereof include assembly, processing, painting, and cleaning. The robot10is controlled using a model constructed by machine learning of the data described later. Therefore, the robot system1basically does not require the assistance of an operator and can perform the work autonomously. Having the robot10perform work by itself in this way is sometimes referred to as “autonomous operation”. In the robot system1of the present embodiment, the robot10can also be operated according to the operation of an operator. In other words, the robot10can not only perform the work autonomously but also perform the work according to the operation of the operator. As shown inFIG.1, the robot system1includes a robot10, an operation unit20, a switching device30, and a control unit40. The devices are connected to one another via a wired or wireless network. The robot10includes an arm portion attached to a pedestal. The arm portion has a plurality of joints, and each joint is provided with an actuator. The robot10operates the arm portion by operating the actuators in response to an operation command input from the outside. The operation command includes a linear velocity command and an angular velocity command. An end effector according to the work content is attached to the tip of the arm portion. The robot10performs work by operating the end effector in response to an operation command input from the outside. The robot10is equipped with sensors for detecting the operation of the robot10and the surrounding environment. In this embodiment, the motion sensor11, the force sensor12, and the camera13are attached to the robot10. The motion sensor11is provided for each joint of the arm portion of the robot10and detects the rotation angle or angular velocity of each joint. The force sensor12detects the force received by the robot10during the operation of the robot10. The force sensor12may be configured to detect the force applied to the end effector, or may be configured to detect the force applied to each joint of the arm portion. The force sensor12may be configured to detect a moment in place of or in addition to the force. The camera13detects an image of the workpiece being worked on (the progress of the work on the workpiece). In place of or in addition to the camera13, a sound sensor for detecting sound and/or a vibration sensor for detecting vibration may be provided, and the progress of the work on the workpiece can be detected based on the detection results of these sensors. The data detected by the motion sensor11is motion data indicating the motion of the robot10, and the data detected by the force sensor12and the camera13is ambient environment data indicating the state of the environment around the robot10. The data detected by the motion sensor11, the force sensor12, and the camera13are state values indicating the progress of the work (the work on the workpiece) of the robot10. In the following description, the motion sensor11, the force sensor12, and the camera13provided in the robot10may be collectively referred to as “state detection sensors11to13”.
Further, the data detected by the state detection sensors11to13may be particularly referred to as “sensor information”. The state detection sensors11to13may be provided around the robot10instead of being attached to the robot10. The operation unit20includes an operation device21, a display device22, and an input unit23. The operation device21is a member operated by an operator to operate the robot10. The operation device21is different depending on the work content, but is, for example, a lever operated by the operator or a pedal operated by the foot. The operation device21includes a known operation force detection sensor (not shown). The operation force detection sensor (force detection sensor) detects the operation force, which is the force applied by the operator to the operation device21. When the operation device21is configured to be movable in various directions, the operation force may be a value including the direction and magnitude of the force, for example a vector. The operation force is not only the force (N) applied by the operator, but also the acceleration (that is, the value obtained by dividing the force applied by the operator by the mass of the operation device21), which may be a value linked to the force. In the following description, the operation force applied by the operator to the operation device21may be particularly referred to as “operator operation force”. The operator operation force output by the operator operating the operation unit20(operation device21) is converted into an operation command by the switching device30as described later. The display device22is a dot matrix type display such as a liquid crystal or an organic EL. The display device22is arranged in the vicinity of the operation device21and displays information on the work performed by the robot system1based on a video signal, for example, a notification signal described later. When the operation device21is arranged at a position away from the robot10, the display device22may display an image in the vicinity of the robot10. The input unit23is a key or the like that receives the input of the work state by the operator at the time of additional learning described later, and outputs the input work state to the control unit40(additional learning unit43). A robot10, an operation unit20, and a control unit40are connected to the switching device30. The operator operation force output by the operation unit20and the calculation operation force output by the control unit40, which will be described later, are input to the switching device30. The switching device30outputs an operation command for operating the robot10to the robot10and the control unit40(communication unit41). The switching device30is composed of, for example, a known computer, and includes an arithmetic unit (CPU, etc.) and a storage unit (for example, ROM, RAM, HDD, etc.). The switching device30can function as various means by reading and executing the program stored in the storage unit by the arithmetic unit. When a name is given for each function, the switching device30includes a switching unit31and a conversion unit32. The switching unit31is configured to output either the operator operation force or the calculation operation force to the conversion unit32from the input operator operation force and calculation operation force. 
The switching unit31is provided with a connector or an antenna, and outputs either the operator operation force or the calculation operation force to the conversion unit32based on a setting signal which is received from outside the switching device30and which indicates whether the operator operation force or the calculation operation force is to be converted. As a result, it is possible to switch between a state in which the operator operates the robot10(that is, the robot10works based on the operator operation force output by the operation unit20) and a state in which the robot system1autonomously makes the robot10work (that is, the robot10works based on the calculation operation force output by the control unit40). The switching unit31may be provided with a sensor (not shown), and the switching unit31outputs the operator operation force to the conversion unit32when the switching unit31determines that the operator is operating the operation unit20(operation device21), for example, when the switching unit31determines that the value of the operator operation force is more than a threshold value. The switching unit31outputs the calculation operation force to the conversion unit32when the switching unit31determines that the operator is not operating the operation unit20, for example, when the switching unit31determines that the value of the operator operation force is less than the threshold value. As a result, the switching unit31can place the system in the state in which the operator operates the robot10whenever the operator is operating the operation unit20, without relying on the setting signal. The conversion unit32converts either the operator operation force or the calculation operation force input from the switching unit31into an operation command for operating the robot10, and the conversion unit32outputs the operation command to the robot10and the control unit40(communication unit41). The control unit40is composed of a known computer, and includes an arithmetic unit (CPU, etc.) and a storage unit (for example, ROM, RAM, HDD, etc.). The control unit40can function as various means when the arithmetic unit reads and executes the program stored in the storage unit. When named for each function, the control unit40includes a communication unit41, a learning control unit42, an additional learning unit43, a determination unit44, a notification unit45, and a timekeeping unit46. The communication unit41includes a connector or an antenna, and is configured to output an input from the outside of the control unit40to each of the units42to46in the control unit40. The communication unit41is also configured to output the output from each of the units42to46in the control unit40to the outside of the control unit40. For example, the input from the outside of the control unit40received by the communication unit41includes the operator operation force output by the operation unit20(operation device21), the work state output by the operation unit20(input unit23), the operation commands output by the switching device30(conversion unit32), and the sensor information output by the state detection sensors11to13. The output from the communication unit41to the outside of the control unit40includes, for example, the calculation operation force output to the switching device30and the notification signal output to the operation unit20(display device22), both described later. Hereinafter, input and output between each of the units42to46in the control unit40and the outside of the control unit40may be described without explicitly stating that they pass through the communication unit41.
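Returning to the threshold test the switching unit31applies when no setting signal is used, the behavior can be written compactly. The sketch below is an assumption-laden paraphrase (the threshold value, the 6-element force vector, and the proportional conversion are invented for illustration): it forwards whichever operation force is active and converts it into an operation command with linear and angular velocity components.

```python
import numpy as np

FORCE_THRESHOLD = 0.5  # illustrative threshold on the operator operation force

def select_operation_force(operator_force: np.ndarray,
                           calculated_force: np.ndarray) -> np.ndarray:
    """Switching unit: pass the operator operation force while its magnitude
    exceeds the threshold, otherwise pass the calculation operation force."""
    if np.linalg.norm(operator_force) > FORCE_THRESHOLD:
        return operator_force
    return calculated_force

def to_operation_command(force: np.ndarray, gain: float = 0.1) -> dict:
    """Conversion unit: map an operation force to linear/angular velocity
    commands (a simple proportional mapping, assumed here for illustration)."""
    return {"linear_velocity": gain * force[:3], "angular_velocity": gain * force[3:]}

operator = np.zeros(6)                        # operator is not touching the device
calculated = np.array([1.0, 0, 0, 0, 0, 0])   # model wants to move along +x
print(to_operation_command(select_operation_force(operator, calculated)))
```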
The timekeeping unit46has a well-known timekeeping function using an internal clock of the arithmetic unit or the like. The timekeeping function includes, for example, a timer function that, based on a trigger signal from outside the timekeeping unit46, starts outputting a timer signal at a predetermined time interval (for example, every second) from the time when the trigger signal is input. The timekeeping unit46may be a timer that starts outputting a timer signal at the time interval from the time when the trigger signal is input, based on a trigger signal from outside the timekeeping unit46and a signal indicating a time interval (for example, 1 second). Further, the trigger signal from outside the timekeeping unit46includes a first trigger signal that causes the timekeeping unit46to start outputting the timer signal and a second trigger signal that causes the timekeeping unit46to end the output of the timer signal. The learning control unit42causes the robot10to perform work by outputting an operation force to the robot10via the switching device30using the model constructed by machine learning. Hereinafter, the operation force output by the control unit40(learning control unit42) may be referred to as the “calculation operation force”. Hereinafter, the method of constructing this model will be specifically described. In this embodiment, the output of the calculation operation force is switched every second (that is, at the time interval of the timer signal of the timekeeping unit46). As shown inFIG.2, when the robot10performs the work of inserting the workpiece100into the recess110, the work can be classified into four work states, for example: in the air, contact, insertion, and completion. The work state 1 (in the air) is a state in which the robot10holds the workpiece100and positions it above the recess110. The work state 2 (contact) is a state in which the workpiece100held by the robot10is in contact with the surface on which the recess110is formed. The work state 3 (insertion) is a state in which the workpiece100held by the robot10is inserted into the recess110. The work state 4 (completion) is a state in which the workpiece100held by the robot10is completely inserted into the recess110. In this way, the four work states are a series of work by the robot10classified for each process, and when the work of the robot10proceeds correctly, the work state changes in the order of work state 1 (in the air), work state 2 (contact), work state 3 (insertion), and work state 4 (completion). In addition, there is a work state 5 (twist) as another work state. The work state 5 (twist) is not registered as a work state at the stage of the first machine learning. In the work state 5 (twist), the workpiece100is inserted into the recess110, but the insertion cannot be advanced further. In the work state 5 (twist), the work cannot be continued unless the work state changes into work state 1 (in the air), that is, unless the workpiece100is separated from the recess110. Next, the data on which the learning control unit42performs machine learning will be described. The learning control unit42performs the machine learning of the current work state, the next work state associated with the current work state (that is, the work state to be transitioned to next), and at least one set of a state value and the operation force associated with that state value.
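The four work states of FIG.2, together with the later-registered work state 5, can be captured as a simple enumeration with the nominal order of transitions. This is only an illustrative restatement of the text in code form, not code from the disclosure.

```python
from enum import Enum

class WorkState(Enum):
    IN_AIR = 1      # workpiece held above the recess
    CONTACT = 2     # workpiece touching the surface around the recess
    INSERTION = 3   # workpiece partially inserted into the recess
    COMPLETION = 4  # workpiece completely inserted
    TWIST = 5       # stuck state, registered later through additional learning

# Nominal progression when the work proceeds correctly.
NOMINAL_ORDER = [WorkState.IN_AIR, WorkState.CONTACT,
                 WorkState.INSERTION, WorkState.COMPLETION]

# Recovery transition for the stuck state: back to "in the air".
RECOVERY = {WorkState.TWIST: WorkState.IN_AIR}
```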
The state value is a value indicating the progress of the work of the robot10, and is a value that changes according to the progress of the work. The state value includes sensor information (for example, work status such as position, speed, force, moment, image, etc.) detected by the state detection sensors11to13. The state value may also include information calculated based on the sensor information (for example, a value indicating a change over time of the sensor information from the past to the present). FIG.3is a diagram showing an example of data on which the learning control unit42performs machine learning.FIG.4is a diagram conceptually showing an example of the correspondence between the state value and the work state in the model. As shown inFIGS.3and4, the current work state is work state 2 (contact) and the current state value is S210. Then, the learning control unit42performs the machine learning of the operation of the robot10for n seconds (n is an integer of 1 or more) so that the work state is changed from the work state 2 (contact) into the work state 3 (insertion), in other words, so that the state value is changed from S210into S310. The learning control unit42performs the machine learning of the data shown inFIG.3and constructs a model. Specifically, the learning control unit42performs the machine learning of the current work state 2 (contact), the next work state 3 (insertion), the current state value S210, the current operation force I210, the state value after m seconds S21mand the operation force after m seconds I21m(m is an integer from 1 to n−1), the state value after n seconds S21n, and Inullindicating a dummy operation force. As shown inFIG.3, the current work state 2 (contact) is different from the next work state 3 (insertion), and there are (n+1) sets of state value and operation force, that is, a plurality of sets. The learning control unit42may perform the machine learning of the time (for example, 0 to n seconds after the start) together with the state value at that time (for example, S210to S21n) and the operation force at that time (for example, I210to Inull). Alternatively, the learning control unit42may omit the machine learning of the time and instead perform the machine learning so that the order in which the sets are learned is equal to the order in which they are output. The learning control unit42may also be configured to perform the machine learning of the values except the state value after n seconds S21nand the Inullindicating the dummy operation force. The operation force to be learned may be an operator operation force output by the operation unit20(operation device21), or may be prepared in advance as data. There are various operations of the robot10for changing the work state 2 (contact) into the work state 3 (insertion). For example, an operation of the robot10in which the current state value is S220, indicating the work state 2 (contact), and which results in the state value S310, indicating the work state 3 (insertion), is also included. In this embodiment, the learning control unit42also performs the machine learning of such an operation of the robot10and constructs the model. Since the method of machine learning is the same as the method described above with reference toFIG.3, detailed description thereof will be omitted. The learning control unit42also constructs the model by performing the machine learning of the operation of the robot10for changing the work state 2 (contact) into the work state 3 (insertion) and the operation for changing the work state 3 (insertion) into the work state 4 (completion).
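The data of FIG.3 — a current work state, the next work state, and an ordered list of per-second (state value, operation force) pairs terminated by the dummy force Inull — maps naturally onto a small record type. The names and the two-dimensional state values below are assumptions made for illustration only.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple
import numpy as np

I_NULL = None  # dummy operation force marking the end of a learned sequence

@dataclass
class TrainingRecord:
    current_state: int                 # e.g. 2 for "contact"
    next_state: int                    # e.g. 3 for "insertion"
    # One entry per second: (state value, operation force); the last entry
    # carries I_NULL so that playback knows where to stop.
    sequence: List[Tuple[np.ndarray, Optional[np.ndarray]]]

record = TrainingRecord(
    current_state=2,
    next_state=3,
    sequence=[
        (np.array([0.00, 0.0]), np.array([1.0, 0.0])),   # like S210, I210
        (np.array([0.05, 0.1]), np.array([0.8, 0.2])),   # like S21m, I21m
        (np.array([0.10, 0.3]), I_NULL),                 # like S21n with the dummy force
    ],
)
```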
Since the method of machine learning is the same as the method described above with reference toFIG.3, detailed description thereof will be omitted. When the current work state is the work state 4 (completion), the robot10does not need to operate. In this case, the learning control unit42may perform the machine learning of the current work state 4 (completed), the next work state 4 (completed), the state value after 0 second (that is, the current) S4, and the dummy operation force Inull. The learning control unit42performs, based on the constructed model, a work state estimation process for estimating the current work state from the current state, a next work state estimation process for estimating the next work state from the current state value and the estimated current work state, and an output process for determining the operation force (the calculation operation force) to be output based on the current state value, the estimated current work state, and the next work state and outputting the calculation operation force to the switching device30. As a result, the learning control unit42can control the operation of the robot10so as to properly perform the work. First, the estimation of the current work state (work state estimation process) will be described. As described above, the learning control unit42perform the machine learning of the state value and the work state (and the next work state), and estimates the current work state (and the next work state) based on the current state value. As shown inFIG.4, there are three state values and state values (corresponding to points inFIG.4) which are learned are distributed in the model. When the state values are inside the areas (spaces) which are corresponding to each work states shown inFIG.4, there is high probability that the state values indicates the work state corresponding to the specific work state corresponding to the areas. For example, the area of “work state 2 (contact)” indicates a set (cluster) of state values determined to be work state 2 (contact) among the state values for the machine learning. The set (cluster) is formed by determining the center point of the area of “work state 2 (contact)”. The center point of the work state 2 (contact) is determined so that the distance from the center point of the work state 2 (contact) to the coordinates of the point indicating the state value determined to be work state 2 (contact) is equal or less than the first distance. The center point of the work state 2 (contact) is also determined so that the distance from the center point of other work states to the coordinates of the point indicating the state value determined to be work state 2 (contact) is equal or more than the second distance which is larger than the first distance. Therefore, as shown inFIG.4, when the current state values are S210, S310, the learning control unit42estimates that the current work state is the work state 2 (contact), the work state 3 (insertion), respectively. Next, the process of estimating the next work state (the next work state estimation process) will be described. As described above, the learning control unit42performs the machine learning of the state value, the work state, and the next work state, and estimates the next work state based on the current state value and the estimated current work state. For example, as shown inFIG.4, the current state value is S210, and the current work state is estimated to be work state 2 (contact). 
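The work state estimation just described (FIG.4) amounts to a nearest-centroid test: a state value is attributed to a work state when it lies within the first distance of that state's center point. The sketch below is one possible reading of that rule; the centroids, the distance value, and the two-dimensional state values are invented.

```python
import numpy as np

# Center points of the learned work state clusters (illustrative 2-D state values).
CENTERS = {1: np.array([0.0, 1.0]),    # in the air
           2: np.array([0.0, 0.0]),    # contact
           3: np.array([0.5, -0.5]),   # insertion
           4: np.array([1.0, -1.0])}   # completion
FIRST_DISTANCE = 0.4   # maximum distance from a cluster's own center point

def estimate_work_state(state_value: np.ndarray):
    """Return the work state whose center point is closest, or None if the
    state value lies outside every cluster (the situation later handled by
    the determination unit and additional learning)."""
    dists = {s: np.linalg.norm(state_value - c) for s, c in CENTERS.items()}
    best = min(dists, key=dists.get)
    return best if dists[best] <= FIRST_DISTANCE else None

print(estimate_work_state(np.array([0.05, 0.05])))  # -> 2 (contact)
print(estimate_work_state(np.array([2.0, 2.0])))    # -> None (unknown state)
```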
As shown inFIG.3, when the current state value during the machine learning is S210and the current work state is work state 2 (contact), the next work state is work state 3 (insertion) (that is, the work state is changed from the work state 2 (contact) to the work state 3 (insertion)), and the learning control unit42has performed the machine learning of this operation of the robot10. In this case, the learning control unit42estimates that the next work state is the work state 3 (insertion). Next, the process (output process) for determining and outputting the calculation operation force will be described. As described above, the learning control unit42performs the machine learning of the work state, the next work state, the state value, and the operation force, and determines the calculation operation force to be output to the switching device30based on the current work state, the next work state, and the current state value. For example, as shown inFIG.4, the current state value is S210, the current work state is estimated to be work state 2 (contact), and the next work state is estimated to be work state 3 (insertion).FIG.4shows an arrow extending from the state value S210to the state value S310. The arrow corresponds to the n-second operation of the robot10, shown inFIG.3and learned by the learning control unit42, for changing the work state from the work state 2 (contact) to the work state 3 (insertion). In this case, the learning control unit42outputs a trigger signal to the timekeeping unit46when the first operation force I210shown inFIG.3is output to the switching device30as a calculation operation force. Based on the trigger signal, the timekeeping unit46outputs a timer signal every second from the time when the trigger signal is input. Next, the learning control unit42outputs the operation forces I210to I21(n−1)shown inFIG.3to the switching device30as calculation operation forces every second based on the timer signal from the timekeeping unit46. When the learning control unit42detects that the operation force shown inFIG.3is Inullindicating a dummy operation force, the learning control unit42stops the output of the calculation operation force. As described above, the learning control unit42determines, based on the model constructed by the machine learning, the calculation operation force for operating the robot10from the current state value. As a result, the learning control unit42can operate the robot10according to the current work state, using a more appropriate calculation operation force that also takes the next work state into account. Further, even if there are variations in the shape of the workpiece100, variations in the holding position of the workpiece100, variations in the positions of the recesses110, etc., the learning control unit42repeats the above-mentioned machine learning so that the robot10can flexibly deal with these variations. The additional learning unit43, the determination unit44, and the notification unit45have a function for performing additional learning when the above machine learning cannot deal with the situation. Hereinafter, this additional learning will be described with reference toFIGS.5to7.FIG.5is a flowchart showing a process performed by the robot system regarding additional learning.FIGS.6and7are diagrams conceptually showing the contents of the additional learning according to the determination result of the work state in the model.
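Before turning to the additional learning of FIGS.5to7, the output process described above — emit one learned operation force per timer tick until the dummy force Inull is reached — is essentially a timed playback loop. The sketch below substitutes time.sleep for the timekeeping unit's timer signal and uses None for Inull; it is an assumption about how the text could be realized, not the patent's implementation.

```python
import time
from typing import Callable, List, Optional

def play_back_forces(forces: List[Optional[list]],
                     send_to_switching_device: Callable[[list], None],
                     interval_s: float = 1.0) -> None:
    """Output one calculation operation force per interval; stop when the
    dummy force (None, standing in for Inull) is encountered."""
    for force in forces:
        if force is None:          # Inull: end of the learned sequence
            break
        send_to_switching_device(force)
        time.sleep(interval_s)

# Usage with a stand-in for the switching device.
learned = [[1.0, 0.0], [0.8, 0.2], None]
play_back_forces(learned, send_to_switching_device=print, interval_s=0.01)
```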
In the present embodiment, when the robot system1cannot perform the work autonomously, the operator operates the robot10to assist the work, and additional learning is performed on the model using the operation contents of the assisting operator. Hereinafter, a specific description will be given. In the present embodiment, the operation content of the operator is additionally learned every second (that is, at the time interval of the timer signal of the timekeeping unit46). First, the learning control unit42operates the robot10, and the autonomous work by the robot system1is started (S101). Before the start of the work, the learning control unit42outputs a setting signal to the switching device30indicating that the calculation operation force is to be converted. The switching device30converts the calculation operation force output from the learning control unit42to the operation command and outputs the operation command to the robot10. While the learning control unit42is controlling the robot10(that is, while only the calculation operation force is output to the switching device30), the learning control unit42determines whether or not the current work state corresponds to the work state 4 (completion) based on the current state value (S102, work state estimation process). When the current work state corresponds to the work state 4 (completion), the learning control unit42determines that the work has been completed. Then, the learning control unit42outputs to the switching device30the calculation operation force for moving the arm portion of the robot10to the start position of the next work (for example, the place where the next workpiece100is placed). The switching device30converts the calculation operation force to the operation command and outputs the operation command to the robot10(S112). When the current work state is not work state 4 (completion) (that is, when the work is not completed), the determination unit44determines, based on the current state value, whether or not the work can be continued under the control of the learning control unit42. The determination unit44outputs a determination result indicating whether or not the work can be continued (S103, determination step). In other words, the determination unit44determines, based on the current state value, whether or not the work can be continued without the assistance of the operator. This determination is made based on, for example, the current state value (for example, the sensor information), preset conditions, or the like. Specifically, conditions are set such as the force detected by the force sensor12suddenly increasing, the force detected by the force sensor12exceeding a reference value, or the like. Further, the determination unit44may make an autonomous determination (in other words, create a determination reference or a condition by itself) instead of using the preset conditions. Specifically, the determination unit44receives, from the learning control unit42, the output of a similarity degree of the work state described later, and when the determination unit44determines based on the similarity degree that the current state value does not belong to any work state (for example, when the similarity degree is lower than a predetermined threshold value for every work state), the determination unit44determines that the work cannot be continued.
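The determination of S103 combines hard conditions on the sensor information (a sudden force increase or a force above a reference value) with the similarity-based condition just described. A hedged sketch, with invented thresholds and a scalar force measurement standing in for the force sensor12:

```python
def can_continue(force_now: float, force_prev: float,
                 best_similarity: float,
                 force_limit: float = 20.0,
                 spike_limit: float = 10.0,
                 similarity_threshold: float = 0.5) -> bool:
    """Determination unit: return False (operator assistance needed) if the
    measured force exceeds the reference value, rises suddenly, or the current
    state value is not sufficiently similar to any registered work state."""
    if force_now > force_limit:
        return False
    if force_now - force_prev > spike_limit:
        return False
    if best_similarity < similarity_threshold:
        return False
    return True

print(can_continue(force_now=5.0, force_prev=4.0, best_similarity=0.9))   # True
print(can_continue(force_now=25.0, force_prev=4.0, best_similarity=0.9))  # False
```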
When the determination unit44outputs the determination result indicating that the work of the robot10can be continued under the control of the learning control unit42, the learning control unit42subsequently outputs the calculation operation force to the switching device30and operate the robot10. On the other hand, when the determination unit44outputs the determination result indicating that the work of the robot10cannot be continued under the control of the learning control unit42, a process for requesting the assistance of the operator and performing additional learning is performed. In order to perform the additional learning, the current correct work state and the next work state, the state value, and the operation force for resolving the state in which the work cannot be continued are required. Specifically, the notification unit45notifies that the work cannot be continued based on the determination result indicating that the work of the robot10cannot be continued. Specifically, the notification unit45outputs a first notification signal indicating that the work cannot be continued to the display device22, and the learning control unit42provides the similarity degree as an information for the operator to identify the current correct work state. The notification unit45outputs the second notification signal for displaying the similarity degree to the display device22(S104, notification step). The similarity degree is a value indicating the degree to which the current state value is similar to the (registered) work state in the model. The similarity degree is calculated by comparing the current state value with the distribution of the state values belonging to each work state in the model (that is, learned state values belonging to each work state). To explain by way of example, in a situation where areas of work states 1 to 4 exist as shown in the upper graph ofFIG.6, the current state values S5, S6are outside from these areas. The learning control unit42calculates the similarity degree based on the distance between the coordinates indicating the current state values S5, S6and the center points of the areas of the work states 1 to 4 (or the work states 1 to 4). The similarity degree increases as the distance becomes shorter. The learning control unit42may calculate the similarity degree for each state value, or may calculate one similarity degree in consideration of the comparison results of all the state values. The learning control unit42may calculate and output the similarity degree to all the registered work states, or may output only the similarity to one work state having the highest similarity. The similarity degree is displayed on the display device22as text data, but may be displayed on the display device22using a figure such as a graph, for example. Next, the control unit40performs a process for receiving the input of the work state specified by the operator (S105, input receiving process). For example, the control unit40(notification unit45) transmits a third notification signal to the display device22so that the display device22displays an input field for the operator to input the correct work state using the input unit23. As a result, it is possible to prompt the operator to identify the work state and input the work state. Before or after the process of step S105, the control unit40outputs the setting signal to the switching device30indicating that the operator operation force is to be converted. 
The switching device30changes the setting so that the operation command converted from the operator operation force output by the operation unit20(operation device21) is output. The setting of the switching device30is preferably changed when the display device22displays the input field or before that (for example, when the determination unit44outputs a determination result indicating that the work of the robot10cannot be continued). As a result, the operator can perform the input after confirming the display of the display device22, and the operation command based on the operator operation force can be reliably output to the robot10. The operator confirms the similarity degree displayed on the display device22and identifies the correct work state by visually recognizing the positional relationship between the robot10, the workpiece100, and the recess110, either directly or through a camera. The operator may also identify the correct work state by operating the operation device21to operate the robot10or by directly touching the robot10by hand. As described above, the operator identifies the correct work state (for example, the work state 3, which is a work state in the model) and inputs it by using the input unit23of the operation unit20. If none of the work states in the model is applicable, the operator creates a new work state (for example, a work state 5 that is not a work state in the model) and inputs it by using the input unit23of the operation unit20. When the control unit40(additional learning unit43) determines that a work state in the model has been input by using the input unit23(S106), the control unit40acquires the current state value and corrects the state estimation standard in the model based on the state value (S107, work state estimation standard correction step). To explain with an example, as shown in the upper graph ofFIG.6, in a situation where areas of the work states 1 to 4 exist in the model, the current state value S5, which is outside these areas, is determined to correspond to the work state 3 (insertion). In this case, as shown in the lower graph ofFIG.6, the additional learning unit43modifies the work state 3 (insertion) so that the coordinates indicating the current state value S5are located within the area of the work state 3 (insertion). For example, the additional learning unit43corrects the center point and/or the first distance of the area of the work state 3 (insertion) in the model so that the coordinates of the point indicating the current state value S5, or coordinates close to it, can easily be determined as the work state 3 (insertion). On the other hand, when the control unit40(additional learning unit43) determines that a new work state different from the work states in the model has been input to the input unit23(S106), the control unit40acquires the current state value and registers a new work state in the model based on the state value (S108, work state registration step). To explain with an example, as shown in the upper graph ofFIG.7, in a situation where areas of the work states 1 to 4 exist in the model, the current state value S6is outside these areas, and it is determined that the operator has input to the input unit23that it corresponds to the work state 5 (twist), which is a work state different from the existing work states 1 to 4. In this case, as shown in the lower graph ofFIG.7, the additional learning unit43adds the work state 5 (twist), which is a new work state, to the model.
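Steps S106 to S108 therefore branch between correcting an existing cluster (FIG.6) and registering a new one (FIG.7). A compact sketch of both branches, under the same centroid-plus-first-distance reading of the model assumed earlier (the update rule and numbers are invented, not the patent's method):

```python
import numpy as np

class WorkStateModel:
    """Clusters keyed by work state: each has a center point and a first distance."""
    def __init__(self):
        self.centers = {}
        self.first_distance = {}

    def correct(self, state: int, state_value: np.ndarray) -> None:
        """S107: shift/enlarge an existing cluster so the new point falls inside it."""
        c = self.centers[state]
        d = np.linalg.norm(state_value - c)
        if d > self.first_distance[state]:
            # Nudge the center toward the new point and grow the radius slightly.
            self.centers[state] = c + 0.2 * (state_value - c)
            self.first_distance[state] = max(self.first_distance[state], 0.9 * d)

    def register(self, state: int, state_value: np.ndarray, radius: float) -> None:
        """S108: add a brand-new work state centered on the observed state value."""
        self.centers[state] = state_value.copy()
        self.first_distance[state] = radius

model = WorkStateModel()
model.register(3, np.array([0.5, -0.5]), radius=0.4)
model.correct(3, np.array([1.0, -0.6]))              # S5 judged by the operator to be "insertion"
model.register(5, np.array([2.0, 2.0]), radius=0.4)  # new "twist" state from S6
print(model.centers, model.first_distance)
```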
At this stage, since there is only one coordinate point associated with the work state 5 (twist), the additional learning unit43adds, to the model, an area of the work state 5 (twist) defined by a predetermined distance, corresponding to the first distance used for the other work states, around the (center) point indicating the current state value S6. Next, the operator operates the operation unit20(operation device21). The operation unit20outputs the operator operation force to the switching device30, and the switching device30converts the operator operation force into an operation command and outputs the operation command to operate the robot10. For example, when the operator inputs to the input unit23that the current work state is the work state 3 (insertion), the operator operates the operation device21to make the robot10insert the workpiece100and transition the work state to the work state 4 (completed), that is, to complete the work. When the operator inputs to the input unit23that the current work state is the new work state 5 (twist), the operator operates the operation device21to make the robot10move the workpiece100upward so that it separates from the recess110, and the work state 5 (twist) is changed to the work state 1 (in the air). At this time, the operation unit20outputs the operator operation force with which the operator operates the robot10so as to change the work state to the control unit40(additional learning unit43), and the additional learning unit43acquires the operator operation force and the state values (S109). For example, when the additional learning unit43detects that the operator operation force is input from the operation unit20, the additional learning unit43outputs a trigger signal to the timekeeping unit46. Based on the trigger signal, the timekeeping unit46outputs a timer signal at a predetermined time interval (1 second in the present embodiment) from the time when the trigger signal is input. Next, the additional learning unit43acquires the current state value (for example, sensor information is acquired from the state detection sensors11to13), and acquires the operator operation force from the operation unit20. The additional learning unit43stores an index having a numerical value of 0, the state value, and the operation force (that is, the operator operation force) in association with each other. The additional learning unit43then acquires the state value and the operator operation force every second based on the timer signal output every second from the timekeeping unit46. The additional learning unit43increments the index by 1 and stores the index, the state value, and the operation force (operator operation force) until the operation of the robot10by the operation of the operator is completed. The additional learning unit43determines whether the operation of the robot10by the operation of the operator has been completed, and determines the work state at the completion of the work (that is, the work state after the transition) based on the acquired state value (S110, state transition completion determination step).
For example, the additional learning unit43may determine that the operation of the robot10is completed by detecting, based on the index, the state value, and the operation force stored in association with each other, that the state value has not changed for a certain period of time or more (that is, that the same state value has been continuously stored a certain number of times or more), or that a certain period of time has passed since the output of the operator operation force disappeared (that is, that the absence of an operation force has been continuously stored a certain number of times or more). At this time, the additional learning unit43may determine that the time when the operation of the robot10by the operation of the operator is completed is the first time at which the state value stops changing (for example, when the same state value is continuously stored a certain number of times or more). Alternatively, the additional learning unit43may determine that the time when the operation of the robot10by the operation of the operator is completed is the first time at which the output of the operator operation force disappears (for example, the youngest index when the lack of an operation force is continuously stored a certain number of times or more). Preferably, the additional learning unit43overwrites the operation force associated with the time of completion of the operation of the robot10by the operation of the operator (that is, the youngest index) with Inull, which indicates the dummy operation force. Regarding the determination of the work state at the completion of the operation, for example, the additional learning unit43calculates the state value associated with the time of completion of the operation of the robot10by the operation of the operator based on the index, the state value, and the operation force, and the control unit40(additional learning unit43) performs the process for estimating the work state (work state estimation process). The additional learning unit43additionally learns the acquired operator operation force, the state value, and the work states before and after the state transition (S111, additional learning step). The work state before the state transition is the work state which the operator inputs to the input unit23and which is output to the additional learning unit43in steps S105to S107. For example, in step S106, when the state value is S5, the current work state (work state before the transition) is input as the work state 3 (insertion). In step S109, the operator continues to insert the workpiece100and completes the work; then, in step S110, the additional learning unit43identifies that the work state has transitioned from the work state 3 (insertion) to the work state 4 (completion) and determines that the state value after the state transition is S4. The additional learning unit43creates the data for additional learning shown inFIG.8, which corresponds to the operation of the robot10for p seconds (p is an integer of 1 or more), additionally learns it, and updates the model. Since the method of additional learning is the same as the method of machine learning described above with reference toFIG.3, detailed description thereof will be omitted. By performing this additional learning, the learning control unit42acquires a new method for advancing the insertion of the workpiece100. As a result, even if the same kind of situation occurs from the next time onward, the work can be continued without the assistance of the operator. Further, when the state value is S6, the current work state (work state before the transition) is input as the new work state 5 (twist) in step S106.
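A condensed sketch of the recording loop of step S109and the completion checks of step S110described above is given below; the one-second cadence and the Inulldummy value follow the text, while the function names, the stable-sample count, and the use of the last stored record as the completion record are assumptions.

```python
import time

I_NULL = None          # stands in for the dummy operation force "Inull"
SAMPLE_PERIOD_S = 1.0  # timer signal interval used in the embodiment
STABLE_COUNT = 5       # assumed: how many identical samples mean "no change"

def log_operator_demonstration(read_state_value, read_operator_force):
    """S109: store (index, state value, operator operation force) once per
    timer signal until the operator's operation is judged complete (S110).
    State values are assumed here to be simple comparable values."""
    records = []
    index = 0
    while True:
        state = read_state_value()
        force = read_operator_force()
        records.append((index, state, force))
        if operation_completed(records):
            break
        index += 1
        time.sleep(SAMPLE_PERIOD_S)
    # Overwrite the force at the record taken as the completion time with Inull
    # (the embodiment uses the youngest index of the stable run; the last
    # record is used here only to keep the sketch short).
    last_index, last_state, _ = records[-1]
    records[-1] = (last_index, last_state, I_NULL)
    return records

def operation_completed(records):
    """S110: completion is detected when the state value stops changing, or
    the operator operation force stays absent, for a number of samples."""
    if len(records) < STABLE_COUNT:
        return False
    tail = records[-STABLE_COUNT:]
    states_unchanged = all(s == tail[0][1] for _, s, _ in tail)
    forces_absent = all(f == 0 for _, _, f in tail)
    return states_unchanged or forces_absent
```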
Then, in step S109, the operator moves the workpiece100so that the workpiece100separates from the recess110. Then, in step S110, the additional learning unit43identifies that the work state has changed from the work state 5 (twist) to the work state 1 (in the air) and determines that the state value after the state transition is the state value S1. Then, the additional learning unit43creates the data for additional learning shown inFIG.9, which corresponds to the operation of the robot10for q seconds (q is an integer of 1 or more), additionally learns it, and updates the model. Since the method of additional learning is the same as the method of machine learning described above with reference toFIG.3, detailed description thereof will be omitted. By performing this additional learning, the learning control unit42acquires a method for resolving the twist when the twist occurs. As a result, even if the same kind of situation occurs from the next time onward, the work can be continued without the assistance of the operator. When the additional learning unit43completes the additional learning (S111), the learning control unit42operates the robot10and restarts the autonomous work by the robot system1(S101). Here, before returning to the process of step S101, the control unit40outputs to the switching device30a setting signal indicating that the calculation operation force is to be converted, and the setting of the switching device30is changed so that the operation command obtained by converting the calculation operation force output by the learning control unit42is output to the robot10. As described above, by detecting the state which the robot system1cannot resolve autonomously and performing additional learning associated with the work state, the additional learning can be performed efficiently, and the work can be continued without stopping the robot system1excessively. This embodiment can be modified in various ways. In the present embodiment, the robot system1additionally learns and outputs the operation of the robot10at intervals of one second, which is the time interval of the timer signal, but the timer signal may have a shorter time interval (for example, 0.1 second or less). The robot system1may be configured so that the operation of the robot10can be additionally learned and output even at such a short time interval. As a result, the operation of the robot10can be additionally learned with higher accuracy, and the robot10can be operated with higher accuracy. In the present embodiment, the timekeeping unit46outputs the timer signal every second from the time when the trigger signal is received, based on the trigger signal, and the control unit40additionally learns the operation of the robot10or operates the robot10based on the timer signal. When the robot system1is configured so that the operation of the robot10can be additionally learned and output even if the timer signal has a shorter time interval (for example, a time interval of 0.1 second or less), the timekeeping unit46may be configured to always output the timer signal at this short time interval, not based on the trigger signal. As a result, the configuration of the timekeeping unit46can be simplified without lowering the accuracy of the additional learning of the operation of the robot10or the accuracy of the operation of the robot10.
Specifically, if the timekeeping unit46is configured to constantly output the timer signal at a predetermined time interval, a delay of at most that time interval occurs in the additional learning and in the output of the operation of the robot10based on the timer signal. When the timekeeping unit46outputs the timer signal at 1 second intervals as in the present embodiment, a delay of at most 1 second occurs, so that the influence of the delay cannot be ignored. On the other hand, when the output time interval of the timer signal is a short time interval such as 0.1 second or less (that is, when the robot system1can process the additional learning and the output of the operation of the robot10in substantially real time), the effect of the above delay is minor. In the present embodiment, the number of work states is at most 5, but the number of work states may be increased. This makes it possible to determine a more appropriate work state corresponding to the current state value. In the present embodiment, in step S105, the input unit23receives the input of the current work state by the operator and outputs it to the additional learning unit43, but the input unit23may instead receive and output the input of the next work state. For example, a key (not shown) may be provided so that the additional learning unit43can receive the next work state. As a result, in step S110, the identification of the work state after the transition performed by the additional learning unit43can be omitted. In the present embodiment, the work state transitions from the work state 2 (contact) to the work state 3 (insertion), but the transition is not limited to this, and the work state may be changed from the work state 2 (contact) to the work state 1 (in the air). This makes it possible to determine a more appropriate transition of the work state. For example, as shown inFIG.4, there is a case where the current work state is the work state 2 (contact) and the current state value is S230. The area of the work state 3 (insertion) is far from the current state value S230, and the area of the work state 1 (in the air) is close to it. In such a case, the operation of the robot10that transitions the work state from the current state value S230directly to the work state 3 (insertion) may be learned, but it is better to operate the robot10so that the workpiece100is first moved upward to change the state value to S1(that is, to transition to the work state 1 (in the air)) and is then moved further so that the state value changes to S210. Which operation is more appropriate may be evaluated by, for example, the time until the operation of the robot10is completed (that is, until the work state 4 (completion) is reached). In the present embodiment, the switching device30includes a switching unit31and a conversion unit32, but the configuration of the switching device30is not limited to this. For example, the switching device30may be provided with a regulatory unit which stops the output of the operation command from the conversion unit32based on the force received by the robot10detected by the force sensor12and the operation force input to the conversion unit32. As a result, the operation of the robot10can be regulated when an unexpected situation occurs.
For example, when the regulatory unit determines that the detection value of the force sensor12is equal to or higher than the threshold value, that the operator operation force or the calculation operation force is equal to or higher than the threshold value, and that the detection value of the force sensor12continues to increase in the same direction, the regulatory unit stops the output of the operation command from the conversion unit32. In the present embodiment, before or after the process of step S105(for example, when the determination unit44outputs a determination result indicating that the work of the robot10cannot be continued), the control unit40changes the setting of the switching device30so that the robot10is operated by the operator operation force. Then, in steps S110and S111, when the additional learning is determined to be completed, the control unit40changes the setting so that the switching device30outputs the operation command converted from the calculation operation force. The control unit40(learning control unit42) may interrupt the output of the calculation operation force instead of changing the setting so that the switching device30outputs the operation command converted from the operator operation force, and the control unit40may restart the output of the calculation operation force instead of changing the setting so that the switching device30outputs the operation command converted from the calculation operation force. As a result, when the work of the robot10cannot be continued under the control of the learning control unit42, it is possible to suppress the risk of unnecessary operation of the robot10due to the calculation operation force. Next, the second embodiment will be described with reference toFIGS.10to15. In the description of the second embodiment, members that are the same as or similar to those in the first embodiment may be designated by the same reference numerals in the drawings, and their description may be omitted. In the second embodiment, the work performed by the robot10is classified into a plurality of operations as shown inFIG.10. Specifically, in the operation A, the workpiece is positioned above the member while the robot10holds the workpiece, and the workpiece is brought close to the surface of the member. In the operation B, the workpiece is moved as it is, and the workpiece is brought into contact with the surface of the member. In the operation C, the workpiece is moved toward the position of the opening. While the workpiece is moved, the workpiece is maintained in contact with the surface of the member. In the operation D, the end of the workpiece is brought into contact with the inner wall of the opening. In the operation E, the workpiece is inserted into the opening. Here, the “work state” described in the first embodiment and the “operation” in the second embodiment are similar concepts. For example, in the second embodiment, it is possible to regard the period during which the operation A is being performed as the work state A and the period during which the operation B is being performed as the work state B (the same applies to the operations C and D). Next, the robot system1of the second embodiment will be described with reference toFIG.11. The second embodiment is different from the first embodiment in that a progress degree and a certainty degree are acquired and used. As described in the first embodiment, the control unit40can function as various means by reading and executing the program stored in the storage unit by the arithmetic unit.
The control unit40of the second embodiment further includes a progress acquisition unit51, a certainty acquisition unit52, a progress monitoring unit56, and a certainty monitoring unit57. The progress acquisition unit51acquires the progress degree. The progress degree is a parameter used to evaluate which degree of progress the movement performed by the robot10based on the output of the model constructed by the above-mentioned machine learning (including additional learning) corresponds to in a series of operations. In the present embodiment, the progress degree takes a value in the range of 0 to 100, and the closer it is to 100, the more a series of work is progressing. The calculation of the progress degree will be described with reference toFIG.12. In the present embodiment, as shown inFIG.12, the progress degree is calculated in consideration of the cluster obtained by clustering the states of the robot10that can be acquired in chronological order (time series) and the operation history of the robot10. The state of the robot10described above can be expressed as a multidimensional vector (feature vector) including the sensor information from the state detection sensors11to13and the calculation operation force of the model. The feature vector changes variously in the process of the robot10performing a series of operations. The feature vector may include not only the value of the sensor information and the calculation operation force at the present time, but also the past history of the sensor information and the calculation operation force. In the following description, the sum of the state of the robot10, the state of its surroundings, and the result estimated by the model accordingly may be referred to as a phase of the robot10. As the feature vector described above, data (phase data) representing the phase of the robot10is used. The phase data corresponds to a combination of both the input data (specifically, sensor information) input to the model and the output data (specifically, calculation operation force) output from the model. Clustering is a type of unsupervised learning, and is a method of learning the law of distribution from a large number of data to acquire a plurality of clusters, which are a group of data having similar characteristics to each other. As a clustering method, a known non-hierarchical clustering method can be appropriately used. The aspect of the robot10is characterized for each of the above-mentioned operations (operations A to E). For example, the characteristics of the state in the operation A (that is, the phase data acquired in the operation A) are different from the characteristics of the state in the operation B. Therefore, by performing appropriate clustering on the above-mentioned feature vectors, the phases of the robot10can be classified for each operation. The learning control unit42calculates the progress degree corresponding to the current aspect of the robot10by using the above clustering result. As shown inFIG.12, the value of the progress degree is predetermined so as to gradually and cumulatively increase according to the order of operations indicated by each cluster. Since the series of operations of the robot10can be expressed as arranging the feature vectors in chronological order, the chronological order of each cluster can be obtained by using the information in this chronological order. 
The learning control unit42calculates which cluster the feature vector indicating the current aspect of the robot10belongs to, and outputs the progress degree associated with that cluster in response to a request from the progress acquisition unit51or the certainty acquisition unit52. In order to specify which cluster the feature vector belongs to, for example, the distance between the center-of-gravity position of each cluster and the feature vector may be obtained, and the cluster whose center of gravity has the shortest distance may be selected. As shown inFIG.13, when the work of the robot10is progressing (that is, when the phase of the robot10is appropriately transitioned), the value of the progress degree increases with the passage of time. However, when the work of the robot10does not proceed (for example, when the transition to a specific phase is repeated), the value of the progress degree does not increase over time. Therefore, the user can easily grasp whether or not the autonomous work by the robot10is progressing by observing the change in the progress degree. As a result, stagnation of the operation of the robot10can be easily found, and appropriate measures such as correction of the operation can be taken. The certainty acquisition unit52acquires the certainty degree. The certainty degree is a parameter used to evaluate whether the operation of the robot10is certain (in other words, whether the output estimated by the model is certain). The model of the learning control unit42learns in advance the correspondence between the state of the robot10and its surroundings and the operator operation force applied by the user's operation performed in that state. In other words, the model operates on rules obtained from a number of known states. Due to the generalization ability inherent in machine learning models, it is expected that the model will output an appropriate calculation operation force even in unknown situations. However, just as it is difficult for humans to act with certainty when they are placed in a completely new situation that is difficult to predict from past experience, it can be said that, from the standpoint of the model, the farther the current state is from the known states that it has learned so far, the less confident the estimation result is. In this sense, the certainty degree indicates the certainty of the estimation. In the present embodiment, the learning control unit42is constructed with stochastic discriminators for discriminating the aspect of the robot10by machine learning. A plurality of the stochastic discriminators are provided according to the number of clusters classified by the above-mentioned clustering. For example, the stochastic discriminator of the cluster of the operation A is machine-learned so that, when a feature vector classified into the cluster of the operation A by the clustering is input, a value close to 100 is output, and when a feature vector classified into the cluster of another operation is input, a value close to 0 is output. Therefore, when a feature vector indicating the current phase of the robot10is input to the stochastic discriminator for which learning has been completed, the stochastic discriminator outputs a value indicating whether or not the phase is likely to be the operation A. It can be said that this value substantially indicates the probability (estimated probability) that the current aspect of the robot10is the operation A.
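A minimal sketch of the progress-degree lookup and nearest-centroid cluster assignment described above follows; the centroids and the cumulative progress values assigned to the clusters of the operations A to E are invented numbers used only to show the mechanism.

```python
import numpy as np

# Hypothetical clustering result: one centroid per operation (A..E) and a
# cumulative progress value assigned according to the chronological order of
# the clusters. All numbers are assumptions for illustration.
centroids = {
    "A": np.array([0.0, 0.0]), "B": np.array([1.0, 0.0]),
    "C": np.array([2.0, 0.0]), "D": np.array([3.0, 0.0]),
    "E": np.array([4.0, 0.0]),
}
progress_of_cluster = {"A": 20, "B": 40, "C": 60, "D": 80, "E": 100}

def progress_degree(feature_vector):
    """Assign the current phase to the cluster with the nearest centroid and
    return the progress value pre-assigned to that cluster."""
    nearest = min(centroids,
                  key=lambda op: np.linalg.norm(feature_vector - centroids[op]))
    return progress_of_cluster[nearest]

print(progress_degree(np.array([2.2, 0.1])))   # -> 60, i.e. operation C
```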
Learning is performed in the same manner as described above for the stochastic discriminators of the other clusters (the other operations B to E). By inputting the feature vector to each of the plurality of stochastic discriminators, it is possible to determine which of the operations A to E the current situation corresponds to, and how certain that estimation is, based on the stochastic discriminators. In the present embodiment, as shown inFIG.14, the maximum value among the estimated probabilities output by the plurality of stochastic discriminators is used as the certainty degree. If the current aspect is similar to a known aspect of the robot10(in other words, an aspect classified into any of the operations A to E by the clustering), the value of the certainty degree becomes large. On the other hand, if the current aspect is not similar to a known aspect of the robot10, the value of the certainty degree will be small. As shown inFIG.15, the user can evaluate whether or not the operation of the robot10is likely by looking at the value of the certainty degree during a series of operations, for example. That is, if an operation that the model has not learned is performed, the value of the certainty degree decreases. Therefore, the user can grasp that the series of operations includes an operation that is insufficiently learned. The control unit40may automatically detect an operation with a low certainty degree. On the other hand, if an operation that the model has learned is performed, the value of the certainty degree increases. Therefore, the user can also know that the operation of the robot10in a certain aspect matches the learned operation. The user can also use the value of the certainty degree to confirm that the robot10has reached a learned state (for example, any of the operations A to E). The progress monitoring unit56monitors the progress degree acquired by the progress acquisition unit51described above. As shown inFIG.13, the progress monitoring unit56can detect a situation in which the progress degree does not change for a predetermined time, and can thereby detect a stagnation in the operation of the robot10. When the progress monitoring unit56detects the stagnation of the operation of the robot10, the control unit40may stop the control of the robot10and stop the work by the robot10. In this case, a time-out function (a function of giving up the continuation of the work) based on the monitoring result of the progress monitoring unit56can be realized. In the second embodiment, the determination step (S103) of the first embodiment is performed using this time-out function. Specifically, the determination unit44determines that the work cannot be continued under the control of the learning control unit42when the time during which the progress degree does not increase, which is output by the progress monitoring unit56, becomes longer than the threshold value. The progress degree is also used in the work state estimation process (S102) of the first embodiment to determine whether or not the work has been completed. Specifically, the learning control unit42determines whether or not the current state is the work state corresponding to the operation E and whether the progress degree is equal to or higher than the threshold value (for example, 100), and when the progress degree is equal to or higher than the threshold value, the work is determined to be completed. The certainty monitoring unit57monitors the certainty degree acquired by the certainty acquisition unit52.
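The certainty degree can be sketched as below, assuming each trained stochastic discriminator is available as a callable returning an estimated probability in the range 0 to 100; the stub discriminators and their outputs are placeholders, not trained models.

```python
def certainty_degree(feature_vector, discriminators):
    """Feed the current feature vector to the per-operation stochastic
    discriminators and take the maximum estimated probability as the
    certainty degree; how the discriminators are trained is outside this
    sketch.
    """
    estimated = {op: clf(feature_vector) for op, clf in discriminators.items()}
    best_op = max(estimated, key=estimated.get)
    return estimated[best_op], best_op

# Example with stub discriminators (assumed outputs, not trained models):
stubs = {"A": lambda x: 3.0, "B": lambda x: 88.0, "C": lambda x: 12.0}
degree, op = certainty_degree(None, stubs)
print(degree, op)   # -> 88.0 "B": the current phase most likely matches operation B
```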
The certainty monitoring unit57constantly monitors the value of the certainty degree and detects an operation in which the value of the certainty degree does not reach a predetermined value, as shown inFIG.15. The certainty monitoring unit57also detects how similar the current work state is to a preset work state. The certainty degree can be used in place of the similarity degree of the first embodiment. Therefore, the learning control unit42can perform the work state estimation process (S102) of the first embodiment by using, for example, the certainty degree output by the certainty monitoring unit57. Specifically, the learning control unit42determines that the work has been completed when the current work state is the work state corresponding to "completion" and the value of the certainty degree is equal to or higher than the threshold value. Further, since the similarity degree is also used in the determination step (S103) and the like of the first embodiment, the determination step and the like can likewise be performed using the certainty degree. Specifically, the determination unit44determines that the work cannot be continued when it determines, based on the certainty degree output by the certainty monitoring unit57, that the current value of the certainty degree is lower than the threshold value. This is because, if the value of the certainty degree is low, the current work state is likely to be different from the learned work states. Further, the certainty degree can be used, like the similarity degree of the first embodiment, as information for the operator to identify the current correct work state. Specifically, the notification unit45outputs the first notification signal indicating that the work cannot be continued to the display device22, and outputs the second notification signal indicating the certainty degree to the display device22. In this way, by using the progress degree and/or the certainty degree, the degree of progress of the work can be quantified, so that a more accurate determination can be performed. As described above, the robot system1includes the robot10, the state detection sensors11to13, the timekeeping unit46, the learning control unit42, the determination unit44, the operation device21, the input unit23, the switching device30, and the additional learning unit43. The robot10performs work based on an operation command. The state detection sensors11to13detect and output a state value indicating the progress state of the work of the robot10. The timekeeping unit46outputs a timer signal at predetermined time intervals. The learning control unit42outputs a calculation operation force based on the state value detected by the state detection sensors11to13and the timer signal by using a model, the model being constructed by machine learning of a work state, a next work state associated with the work state, and at least one set of the state value and the operation force associated with the state value. The determination unit44outputs a determination result indicating whether or not the work of the robot10can be continued under the control of the learning control unit42based on the state values detected by the state detection sensors11to13(determination step). The operation device21is operated by an operator, and detects and outputs an operator operation force that is an operation force applied by the operator. The input unit23(a key in the figure) receives and outputs the input of the work state by the operator.
The switching device30receives the operator operation force and the calculation operation force, converts either the operator operation force or the calculation operation force into an operation command, and outputs the operation command. The additional learning unit43additionally learns the work state, the next work state associated with the work state, and at least one set of the state value and the operation force associated with the state value, and updates the model, based on the determination result indicating that the work of the robot10cannot be continued, the work state output by the input unit23, the operator operation force output by the operation device21, the state value detected by the state detection sensors11to13, and the timer signal (additional learning process). As a result, by additionally learning the current and next work states, the operation force, and the state values, even if the robot10cannot continue the work, the robot system1can autonomously resolve the problem and continue the work. In the robot system1of the above embodiment, the additional learning unit43calculates the next work state associated with the work state based on the state value (state transition completion determination step) and updates the model by additionally learning the work state, the next work state, the state value, and the operator operation force. In the robot system1of the above embodiment, the input unit23receives the input by the operator of the next work state associated with the input work state and outputs it to the additional learning unit43. The additional learning unit43performs the additional machine learning of the work state, the next work state, the state value, and the operator operation force and updates the model. As a result, the work of the robot10can be additionally learned so as to include the transition of the work state, and the work of the robot10can be additionally learned more appropriately. In the robot system1of the above embodiment, the additional learning unit43calculates the next work state associated with the work state based on the state value and updates the model by additionally learning the work state, the next work state, the state value, and the operator operation force. As a result, the operation of the robot10can be additionally learned with high accuracy. In the robot system1of the above embodiment, the switching device30converts either the operator operation force or the calculation operation force into the operation command, and outputs the operation command, based on a setting signal designating which of the operator operation force and the calculation operation force is to be converted. As a result, the state in which the operator operates the robot10and the state in which the robot system1performs autonomous operation can be switched from outside the switching device30, in particular by the control unit40. In the robot system1of the above embodiment, the switching device30includes a sensor. The sensor detects the magnitude of the operator operation force output by the operation device21. The switching device30converts either the operator operation force or the calculation operation force into an operation command and outputs the operation command based on the magnitude of the detected operator operation force. As a result, the switching device30can be placed in the state in which the operator operates the robot10while the operator is operating the operation unit20.
In the robot system1of the above embodiment, the learning control unit42interrupts the output of the calculation operation force based on the determination result indicating that the work of the robot to cannot be continued. The learning control unit42resumes the output of the calculation operation force when the learning control unit42determines that the additional learning is completed. As a result, when the work of the robot10cannot be continued under the control of the learning control unit42, it is possible to suppress the risk of unnecessary operation of the robot10due to the calculation operation force. In the robot system1of the above embodiment, the robot system1includes a notification unit45and a display device22. The notification unit45outputs a notification signal based on the determination result indicating that the work of the robot cannot be continued (notification step). The display device22displays based on the notification signal. As a result, the operator can accurately grasp the timing at which the additional learning of the work of the robot10is required, the information related to the additional learning, and the like. In the robot system1of the above embodiment, the learning control unit42calculates and outputs the similarity degree indicating the degree of similarity of the current state value and the work state in the model based on the state values detected by the state detection sensors11to13. The notification unit45calculates and outputs the similarity and the notification signal (first and second notification signals) based on the determination result indicating that the work of the robot10cannot be continued. As a result, since the display device22displays the notified similarity degree, the operator can accurately identify the current work state. In the robot system1of the above embodiment, the learning control unit42calculates a similarity degree indicating a degree to which the current state value is similar to the specific work state in the model based on the state value detected by the state detection sensor11to13and the learning control unit outputs the similarity degree. The determination unit44outputs the determination result based on the state value and the similarity degree. For example, if it is determined that they are not similar to any of the work states based on the similarity degree, it may be an unknown state and it may be difficult for the robot system1to continue the work. In this way, by using the similarity degree, it is possible to accurately determine whether or not the work can be continued. The robot system1of the above embodiment includes a certainty acquisition unit52for acquiring a certainty degree indicating a degree of certainty of estimation when the model estimates and outputs the calculation operation force according to the input data input to the model. The notification unit45outputs the notification signal based on the certainty degree and the determination result indicating that the work of the robot10cannot be continued. As a result, the operator can accurately identify the current work state based on the certainty degree displayed on the display device22. The robot system1of the above embodiment includes a certainty acquisition unit52for acquiring a certainty degree indicating a degree of certainty of estimation when the model estimates and outputs the calculation operation force according to the input data input to the model. The determination unit44outputs a determination result based on the certainty degree. 
For example, when the certainty degree is low, it is likely that it is difficult for the robot system1to continue the work because it is in an unknown work state or a state similar to it. In this way, by using the certainty degree, it is possible to accurately determine whether or not the work can be continued. The robot system1of the above embodiment includes a progress acquisition unit51acquiring a progress degree indicating that the work state of the robot realized by the calculation operation force output by the model corresponds to a degree of progress of the work of the robot10. The determination unit44outputs a determination result based on the progress degree. For example, if the progress degree does not change, there is a high possibility that the work by the robot10is stagnant. In this way, by using the progress degree, it is possible to accurately determine whether or not the work can be continued on the robot system1side. In the robot system1of the above embodiment, when the additional learning unit43determines that the work state input to the input unit23is included in the model, the additional learning unit43modifies an estimate standard of the work state in the model based on the state value detected by the state detection sensor11to13. As a result, it is possible to set the model in which the learning control unit42can more accurately estimate the work state. In the robot system1of the above embodiment, when the additional learning unit43determines that the work state input to the input unit23is not included in the model, the additional learning unit43registers the work state input to the input unit23in the model based on the state value detected by the state detection sensor11to13(work state registration process). As a result, even if all work states are not covered at the time of prior machine learning, new work states can be additionally learned. In the robot system1of the above embodiment, the timekeeping unit46outputs the timer signal based on a trigger signal. The timer signal is output at the predetermined time interval from the time when the trigger signal is received. The learning control unit42outputs the trigger signal when starting the output of the calculation operation force. The additional learning unit43outputs the trigger signal when detecting the input of the operator operation force. As a result, it is possible to reduce the influence of the additional learning of the movement of the robot10and the delay caused by the movement of the robot10. While a preferred embodiment of the present invention have been described above, the configurations described above may be modified, for example, as follows. The content of the flowchart ofFIG.5is an example, and processing may be added, processing may be omitted, processing order may be changed, or the like. For example, in a situation where the operator can specify the work state without displaying the similarity degree, the calculation and output of the similarity may be omitted. The data related to the additional learning may be accumulated, and the additional learning may be performed after the data is accumulated to some extent. The data given as the state value is an example, and different data may be used as the state value. For example, when the data related to the direction is used as the state value, the process can be simplified by using the data in the coordinate system common to the robot10and the operator (operation device21and display device22). 
In the above embodiment, it is assumed that each device constituting the robot system1is arranged at the same work site, but if information can be exchanged via a network, at least one device (for example, an operation device21) may be located in a remote location. Further, at least a part of the functions of the control unit40may be arranged at physically separated positions. The present invention can also be applied to the robot system1that does not have the operation device21. The ranges of the progress degree and the certainty degree are arbitrary and can be, for example, 0 to 1. In the above embodiment, the robot10is attached to the pedestal portion, but it may be configured to be able to travel autonomously. Further, the robot10may be configured to perform work with a member other than the arm portion. REFERENCE SIGNS LIST
1 robot system
10 robot
11 motion sensor
12 force sensor
13 camera
21 operation device
22 display device
23 input unit
30 switching device
40 control unit
41 communication unit
42 learning control unit
43 additional learning unit
44 determination unit
45 notification unit
46 timekeeping unit | 76,958 |
11858141 | DETAILED DESCRIPTION The technical solutions in the embodiments of the present disclosure will be clearly and completely described below in conjunction with the drawings in the embodiments of the present disclosure. Apparently, the following embodiments are only part of the embodiments of the present disclosure, not all of the embodiments of the present disclosure. Generally, the components in the embodiments of the present disclosure that are described and shown in the drawings may be arranged and designed in various different configurations. Therefore, the following detailed descriptions for the embodiments of the present disclosure are not intended to limit the scope of the present disclosure, but merely represents the selected embodiments of the present disclosure. Based on the embodiments of the present disclosure, all other embodiments obtained by those skilled in the art without creative work shall fall within the scope of the present disclosure. Hereinafter, the terms “including”. “having” and their cognates that are used in the embodiments of the present disclosure are only intended to represent specific features, numbers, steps, operations, elements, components, or combinations of the foregoing, and should not be understood as first excluding the possibility to have one or more other features, numbers, steps, operations, elements, components, or combinations of the foregoing or add one or more features, numbers, steps, operations, elements, components, or combinations of the foregoing. In addition, the terms “first”, “second”, “third”, and the like in the descriptions are only used for distinguishing, and cannot be understood as indicating or implying relative importance. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meanings as commonly understood by those skilled in the art of the embodiments of the present disclosure. The terms (e.g., those defined in commonly used dictionaries) will be interpreted as having the same meaning as the contextual meaning in the related technology and should not be interpreted as having idealized or overly formal meanings, unless clearly defined in the embodiments of the present disclosure. Exemplarily, taking a torque-controlled robotic arm (or torque-controlled robot) as an example, since the robotic arm has n degrees of freedom, and its end-effector requires m degrees of freedom when performing tasks, in order to make the end-effector compliant when interacting with the external environment, impedance control is usually carried out in Cartesian space. However, when the end-effector of the robotic arm performs impedance tasks in Cartesian space, due to the coupling of the end-effector inertial matrix (or inertia matrix), the impedance behavior represented by the end-effector will also have directional coupling, that is, if only subject to the force in the vertical direction, the other two directions that are not subject to the external force will also be moved, and will deviate from the pre-planned trajectory. FIG.1is a schematic diagram of an application of a robotic arm according to an embodiment of the present disclosure. A robotic arm1may include arm(s) (i.e., link (s))11, joint(s)12, an end-effector13, and a controller14. The joint12may be connected between the end-effector13and an arm11and/or between arms11. The controller14controls (a servo of) the joint12to move (e.g., controls the servo of the joint12to rotate), thereby moving the robotic arm1. 
As shown inFIG.1, as an example, suppose that the end-effector13of the robotic arm1is in contact with an elastic plane P (i.e., the x-y plane), the direction of the contact force is the z direction, and the end-effector draws a circle while keeping the contact force (in newtons, assumed to be 10 N) constant.FIG.2is a schematic diagram of a trajectory obtained through an impedance control method in the prior art. As shown inFIG.2, when the existing impedance control method is adopted, because the inertial matrix M in the robot dynamics is coupled in the x, y and z directions, the circular trajectory of the end-effector13in the x-y plane will be affected by the force in the z direction, and will therefore deviate from the desired trajectory. Generally, the ideal case should be that compliant motion is produced only in the direction in which the external force is applied, while the other directions are not affected. The phenomenon shown inFIG.2is due to the inertial matrix of the end-effector having coupling in all directions.FIG.3is a schematic diagram of tracking end-effector forces obtained through an impedance control method in the prior art. As shown inFIG.3, by tracking the desired force of the end-effector, it can also be found that the actual interaction force acting on the end-effector cannot reach the desired interaction force of 10 N. Based on the above-mentioned problems, an embodiment of the present disclosure provides an impedance control method, which corrects the impedance control law through force sensor information and environmental information so as to achieve direction decoupled impedance control. As a result, it is not only conducive to improving the control accuracy of the torque-controlled robotic arm, but also conducive to improving the flexibility and safety of the robotic arm, and therefore helps to enhance the application of the robotic arm in human-robot interaction. In addition, considering that the tasks are usually performed by the end-effector, the nonlinear term in the dynamics equation is also compensated in real time, thereby simplifying the control complexity of the torque-controlled robotic arm. Embodiment 1 FIG.4is a flow chart of a first embodiment of an impedance control method according to the present disclosure. In this embodiment, the impedance control method is a computer-implemented method executable by a processor. The method can be applied to torque control of a robotic arm or robot with redundant degrees of freedom and with degrees of freedom at the end-effector. In this embodiment, since the impedance control method is aimed at impedance control in a Cartesian space, that is, the end-effector of the robotic arm performs tasks in the Cartesian space, the Cartesian space is also called the task space. The method may be implemented through an impedance control apparatus shown inFIG.9. As shown inFIG.4, the method includes the following steps. S110: obtaining current joint motion information and current joint force information in a joint space of the robotic arm and an actual interaction force acting on an end-effector of the robotic arm, and calculating actual motion information of the end-effector in a task space of the robotic arm based on the joint motion information through forward kinematics. In one embodiment, the actual motion information of the end-effector mainly includes the actual position and actual speed of the end-effector in the current task space.
As an example, the joint motion information of each joint of the robotic arm in the joint space at each moment can be obtained first, and then the actual motion information of the end-effector of the robotic arm in the task space can be calculated based on the kinematic relationship between each joint of the robotic arm and the end-effector, that is, using forward kinematics. For example, the joint motion information may include the angular displacement q and the angular velocity 4 of each joint, where q and q are related to the joint and are both n*1 column vectors, where n is the degree of freedom of the robotic arm. As an example, the angular displacement of each joint may be collected through an angular displacement sensor, a position encoder or the like that is disposed at a corresponding position of the joint, and then the corresponding angular velocity may be obtained by differentiating the angular displacement. Alternatively, the angular velocity of each joint may also be measured through a corresponding angular velocity sensor directly. Therefore, the current actual position x and the actual speed {dot over (x)} of the end-effector may be calculated based on a motion relationship between the joint of the robot and the end-effector using equations of: x=f(q); and {dot over (x)}=J{dot over (q)}; where, x and {dot over (x)} are both m*1 column vectors, m is the degree of freedom of the end-effector of the robotic arm, f(q) represents a mapping relationship between the angular displacement q of the joint and the position x of the end-effector which can be calculated through robot forward kinematics, and J is a m*n Jacobian matrix representing a mapping relationship between a joint velocity and an end-effector linear velocity. The above-mentioned actual interaction force acting on the end-effector is the interaction force between the end-effector and the environment, which may be measured through a force sensor such as a six-dimensional force/torque sensor mounted at the end-effector. The above-mentioned joint force information refers to an external torque generated at each joint due to the force of the environment acting on the robotic arm. As an example, the external torque may be collected through a torque sensor corresponding to each joint. S120: calculating a corrected desired trajectory of the end-effector using current environment information and a pre-planned desired end-effector interaction force in the task space, and calculating the impedance control torque of the robotic arm in the joint space based on the joint force information, the actual interaction force, the actual motion information of the end-effector, and desired end-effector information including the corrected desired trajectory. In order to enable the robotic arm to achieve a corresponding task, the robotic arm is usually controlled according to the pre-planned end-effector desired trajectory. In some embodiments, the desired end-effector information of the robotic arm may include, but is not limited to, a desired trajectory, a desired speed and a desired acceleration of the end-effector, a desired interaction force between the end-effector and the environment, and the like. In this embodiment, considering that the existing impedance control methods often have problems as shown inFIG.2andFIG.3, when performing the impedance control, the impedance control law will be corrected to decouple the directions, thereby achieving the precise control of the robotic arm and the like. 
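As an illustration of the forward-kinematics computation of step S110above, the following sketch uses a planar two-link arm as a stand-in for f(q) and the Jacobian J; the link lengths and joint values are assumed, and a real n-degree-of-freedom arm would substitute its own kinematics.

```python
import numpy as np

# Illustrative forward kinematics for a planar 2-link arm (link lengths l1, l2).
# It only demonstrates the relationships x = f(q) and xdot = J * qdot.
l1, l2 = 0.4, 0.3

def f(q):
    """End-effector position x = f(q)."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    """J maps joint velocities to the end-effector linear velocity."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

q = np.array([0.3, 0.5])        # joint angular displacements (e.g. from encoders)
qdot = np.array([0.1, -0.2])    # joint angular velocities

x = f(q)                        # actual end-effector position
xdot = jacobian(q) @ qdot       # actual end-effector speed
```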
The above-mentioned step S120mainly includes two parts, namely, adjusting the desired end-effector trajectory, and calculating the corresponding impedance control torque using the adjusted desired trajectory. Generally, the interaction force between the end-effector and the environment should be jointly determined by the stiffness of the robotic arm and the stiffness of the environment. However, in the existing impedance control methods, environmental factors are not considered at all. Therefore, in this embodiment, the influences of environmental factors are considered, and the desired end-effector trajectory is re-derived to obtain the corrected desired trajectory that includes the environmental factors. It can be understood as replacing the planned desired trajectory with the recalculated desired trajectory so as to correct the desired trajectory. For example, the above-mentioned environment information may include an environment equivalent stiffness and an environment position. It should be noted that the environment refers to an external object directly in contact with the end-effector, and the environment equivalent stiffness is related to the stiffness of the environment and that of the robotic arm, that is, it is a function of the stiffness of the environment and that of the robotic arm; for example, if the stiffness of the environment is Kfand the stiffness of the robotic arm is Kd, the environment equivalent stiffness Keqwill be Keq=KfKd/(Kf+Kd). Therefore, the corrected desired trajectory xrefmof the end-effector may be calculated based on the current environment equivalent stiffness Keq, the current environment position xfand the desired end-effector interaction force Frefthrough an equation of: xrefm=xf−Fref/Keq. Then, the impedance control torque at the next moment is calculated using the corrected desired trajectory while considering the actual force acting on the end-effector. FIG.5is a flow chart of an example of calculating the impedance control torque in the impedance control method ofFIG.4. As shown inFIG.5, the calculation process of the impedance control torque mainly includes the following steps. S121: calculating a damping control quantity in the task space based on a positional deviation between the corrected desired trajectory and the actual position of the end-effector, a speed deviation between the desired speed and the actual speed of the end-effector, and the actual interaction force, using a spring-mass-damping model. As an example, if the corrected desired trajectory is xrefm, and the current actual position of the end-effector is x, the positional deviation will be xrefm−x. Similarly, if the desired speed of the end-effector is {dot over (x)}ref, and the actual speed is {dot over (x)}, the speed deviation will be {dot over (x)}ref−{dot over (x)}. In order to realize the compliance control of the robotic arm, the robotic arm may be treated as equivalent to a spring-mass-damping system, and the corresponding impedance control quantity may be calculated based on the spring-mass-damping model.
As an example, if the stiffness matrix of the spring in the impedance control is Kd(which is equivalent to the stiffness of the robotic arm), the inertial matrix is Md, the damping matrix is Dd, and the actual interaction force between the end-effector and the environment is Fext, the damping control quantity Δ{umlaut over (x)}cmdcalculated based on the above-mentioned spring-mass-damping model will be: Δ{umlaut over (x)}cmd=Md−1(Dd({dot over (x)}ref−{dot over (x)})+Kd(xrefm−x)+Fext) S122: converting the joint force information from the joint space into the task space, and calculating an acceleration control quantity in the task space based on the damping control quantity, the desired acceleration of the end-effector, and the converted joint force information of the robotic arm. As an example, the acceleration control quantity {umlaut over (x)}cmdmay be calculated using the damping control quantity calculated in the above-mentioned step S121while taking the external torque acting on the robotic arm into account through an equation of: {umlaut over (x)}cmd={umlaut over (x)}ref+Δ{umlaut over (x)}cmd−JM(q)−1τext; where {umlaut over (x)}refis the desired acceleration, Δ{umlaut over (x)}cmdis the damping control quantity, τextis the external joint torque acting on the robotic arm, M(q) is a n*n positive definite symmetric square matrix which represents the inertial matrix (i.e., the inertia matrix) of the robotic arm, and J is a m*n Jacobian matrix. S123: calculating the impedance control torque of the robotic arm in the joint space based on the acceleration control quantity in the task space using a Jacobian matrix. Since the calculated acceleration control quantity is for the end-effector in the task space, in order to realize the torque control of the joints, it needs to be further converted into the torque control quantity of the joint space, that is, the above-mentioned impedance control torque. As an example, the impedance control torque τcmdmay be calculated through an equation of: τcmd=M(q)J#{umlaut over (x)}cmd, in which J#is the pseudo-inverse of the Jacobian matrix J. S130: constructing a dynamics equation of the robotic arm, and determining a compensation torque based on a nonlinear term in the dynamics equation. In this embodiment, considering that the impedance control method is for the torque-controlled robotic arm, and that torque-controlled robotic arms have high nonlinearity, the impedance control method achieves the direction decoupled impedance control while compensating the non-linear terms such as the Coriolis force, the centrifugal force and the gravity term in the joint space or the task space in real time. In the first embodiment, the dynamics equation is a dynamics equation of the robotic arm in the joint space. For example, the force acting on the robotic arm, the centrifugal force, the Coriolis force, the gravity and the like in the joint space can be calculated based on the joint motion information of each joint of the robotic arm that is obtained in the joint space.
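A numeric sketch of the trajectory correction of step S120and the damping control quantity of step S121is given below, assuming scalar stiffnesses per Cartesian direction and diagonal Md, Dd and Kd matrices; all numerical values and variable names are illustrative only.

```python
import numpy as np

def corrected_desired_trajectory(x_f, F_ref, K_f, K_d):
    # Environment equivalent stiffness Keq = Kf*Kd/(Kf+Kd), then
    # xref^m = xf - Fref/Keq (applied per Cartesian direction).
    K_eq = (K_f * K_d) / (K_f + K_d)
    return x_f - F_ref / K_eq

def damping_control_quantity(M_d, D_d, K_d, x_ref_m, x, xdot_ref, xdot, F_ext):
    # S121: dxddot_cmd = Md^-1 ( Dd (xdot_ref - xdot) + Kd (xref^m - x) + Fext )
    return np.linalg.solve(M_d,
                           D_d @ (xdot_ref - xdot) + K_d @ (x_ref_m - x) + F_ext)

# Illustrative values: contact along z with a desired force of 10 N,
# environment stiffness 5000 N/m, robot stiffness 2000 N/m, surface at z = 0.10 m.
x_ref_m_z = corrected_desired_trajectory(x_f=0.10, F_ref=10.0, K_f=5000.0, K_d=2000.0)

m = 3  # end-effector DOF used in this example (x, y, z translation)
M_d = np.eye(m) * 2.0
D_d = np.eye(m) * 100.0
K_d = np.eye(m) * 2000.0
dxddot_cmd = damping_control_quantity(
    M_d, D_d, K_d,
    x_ref_m=np.array([0.30, 0.00, x_ref_m_z]),
    x=np.array([0.30, 0.00, 0.10]),
    xdot_ref=np.zeros(m),
    xdot=np.array([0.0, 0.0, -0.01]),
    F_ext=np.array([0.0, 0.0, 8.0]),
)
```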
As an example, the dynamics equation of the joint space is: M(q){umlaut over (q)}+C(q,{dot over (q)}){dot over (q)}+G(q)=τc+τext; where C(q,{dot over (q)}) is an n*n centrifugal and Coriolis matrix, which can be calculated according to the joint angular displacement q, the joint angular velocity {dot over (q)}, and other joint motion information, G(q) is an n*1 column vector composed of the gravity acting on each link of the robotic arm, τc represents the total control torque acting on the robotic arm, which is an n*1 column vector, and τext represents a column vector composed of the external torque acting on the robotic arm, which is an n*1 column vector. Therefore, a compensation will be performed on the nonlinear terms including the Coriolis force, the centrifugal force, the gravity term and the like. At this time, the compensation term τcompensation is C(q,{dot over (q)}){dot over (q)}+G(q). In the second embodiment, the dynamics equation is a dynamics equation of the robotic arm in the task space. As an example, the construction of the dynamics equation of the task space includes: constructing the dynamics equation of the robotic arm in the joint space first, and then converting the dynamics equation in the joint space into the dynamics equation in the task space based on a motion relationship between each joint of the robotic arm and the end-effector. For example, according to the dynamics equation of the above-mentioned joint space, the dynamics equation of the task space will be: J#TM(q)J#{umlaut over (x)}cmd+J#T(C(q,{dot over (q)}){dot over (q)}−M(q)J#{dot over (J)})J#{dot over (x)}+J#TG(q)=J#Tτc+J#Tτext; where J#T is the transpose of the pseudo-inverse of the Jacobian matrix J. Considering that there are two non-linear terms in the task space, that is, the first non-linear term (C(q,{dot over (q)}){dot over (q)}−M(q)J#{dot over (J)})J#{dot over (x)} containing the centrifugal force and Coriolis force matrix and the second non-linear term G(q) containing the gravity vector, the dynamics equation will be linearized by compensating the Coriolis force and centrifugal force term and the gravity term in the task space. As an example, the compensation torque τcompensation will be: τcompensation=(C(q,{dot over (q)}){dot over (q)}−M(q)J#{dot over (J)})J#{dot over (x)}+G(q). It should be noted that, as in the above-mentioned first embodiment, the dynamic compensation of the robotic arm is usually performed in the joint space to compensate the nonlinear terms including the Coriolis force, the centrifugal force, the gravitational term and the like, that is, the compensation term is C(q,{dot over (q)}){dot over (q)}+G(q). At this time, the compensated dynamics equation will be simplified to M(q){umlaut over (q)}=τc+τext. According to the kinematic conversion relationship between the joint and the end-effector of the robot (see the above-mentioned step S110), it can be known that the compensated dynamics equation of the joint space will be converted into the dynamics equation of the task space, that is, M(q)J#({umlaut over (x)}−{dot over (J)}{dot over (q)})=τc+τext. It can be seen that, for the task space, there are still non-linear terms related to the joint speed. Considering that the task of the actual robotic arm is performed in the Cartesian space of the end-effector, the above-mentioned second embodiment directly compensates the nonlinear term in the task space in real time.
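The two compensation variants can be written down compactly. In the sketch below (again my own illustration, assuming NumPy arrays with the dimensions above), the task-space variant uses the kinematic identity {dot over (q)}=J#{dot over (x)}, so the nonlinear term can be evaluated directly from the joint velocity:

```python
import numpy as np

def joint_space_compensation(C, G, qd):
    # First embodiment: tau_compensation = C(q, qd) @ qd + G(q).
    return C @ qd + G

def task_space_compensation(M, C, G, J_pinv, Jdot, qd):
    # Second embodiment: Coriolis/centrifugal term minus the M(q) J# Jdot coupling term,
    # evaluated with qd = J# @ xd, plus the gravity vector G(q).
    return C @ qd - M @ J_pinv @ (Jdot @ qd) + G
```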
By compensating the centrifugal force, the Coriolis force and the gravity term in the dynamics equation of the task space, the dynamics equation of the task space will be simplified to J#TM(q)J#{umlaut over (x)}cmd=J#Tτc+J#Tτext. It can be seen from the above that a simpler and more intuitive end-effector task planning can be realized based on the simplified dynamics equation of the task space. Moreover, it can be seen from the compensation terms in the task space that the robotic arm can be compensated more completely in the task space, which achieves real linearization and greatly simplifies the control complexity of the torque-controlled robotic arm. S140: performing a joint torque control on the robotic arm based on the impedance control torque and the compensation torque. As an example, the calculated impedance control torque and the compensation torque are superimposed as the total control torque τc required by the robotic arm, that is, τc=τcmd+τcompensation, and then sent to each joint to perform the corresponding torque control, thereby performing the corresponding task. FIG. 6 is a schematic diagram of an application of a control frame of a robotic arm according to an embodiment of the present disclosure. It should be noted that a task is usually performed at the end-effector of the robotic arm, and that in conventional impedance control the impedance of the end-effector is coupled in all directions while the force of the end-effector cannot be accurately tracked. As shown in FIG. 6, in this embodiment, the impedance control method introduces the actual external force information (including the interaction force acting at the end-effector and the external joint torque acting on the robotic arm) measured by a force sensor and the environment information into the task space, so that when the robot interacts with the external environment, it exhibits compliance with direction decoupling. At the same time, the impedance control law is corrected using the corrected desired trajectory obtained by combining the environmental information, and accurate end-effector force tracking can also be realized, which extends the applicability of the impedance control to cases where the force of the end-effector needs to be controlled. For example, when the end-effector of the robotic arm is controlled to write, the output force of the end-effector can be easily controlled to achieve different strengths of writing. At the same time, in this embodiment, the impedance control method also compensates the nonlinear term in the dynamics in real time, especially performing direct compensation in the task space, which can greatly simplify the control complexity and well solve the nonlinearity, the complex control and other issues of the torque-controlled robotic arm. For example, when the impedance control method of this embodiment, that is, the direction decoupled impedance control, is adopted in the torque-controlled robotic arm shown in FIG. 1, the influence of the inertial matrix (or the inertia matrix) at the end-effector is eliminated, since the actual external force information collected by the force sensor and the environment information are used to correct the control law. FIG. 7 is a schematic diagram of a trajectory obtained in a practical testing of an impedance control method of an embodiment of the present disclosure.
As shown in FIG. 7, it can be seen from the trajectory of the x-y plane that the force in the z direction does not affect the motion in the x-y direction, and the desired trajectory in the figure can be obtained. FIG. 8 is a schematic diagram of tracking end-effector forces obtained through an impedance control method of an embodiment of the present disclosure. As shown in FIG. 8, by taking the environment model into account, an accurate tracking of the desired end-effector force can be achieved. Embodiment 2 FIG. 9 is a schematic block diagram of an impedance control apparatus according to an embodiment of the present disclosure. As shown in FIG. 9, in this embodiment, an impedance control apparatus 10 is provided based on the impedance control method of the above-mentioned embodiment 1. The apparatus 10 includes: an information obtaining module 110 configured to obtain current joint motion information and current joint force information in a joint space of the robotic arm and an actual interaction force acting on an end-effector of the robotic arm, and calculate actual motion information of the end-effector in the task space of the robotic arm based on the joint motion information through forward kinematics; an impedance control quantity calculating module 120 configured to calculate a corrected desired trajectory of the end-effector using current environment information and a pre-planned desired end-effector interaction force in the task space, and calculate the impedance control torque of the robotic arm in the joint space based on the joint force information, the actual interaction force, the actual motion information of the end-effector, and desired end-effector information including the corrected desired trajectory; a compensation amount calculating module 130 configured to construct a dynamics equation of the robotic arm, and determine a compensation torque based on a nonlinear term in the dynamics equation; and a torque control module 140 configured to perform a joint torque control on the robotic arm based on the impedance control torque and the compensation torque. It can be understood that the above-mentioned impedance control apparatus 10 corresponds to the impedance control method of embodiment 1. Any optional function in embodiment 1 is also applicable to this embodiment, and will not be described in detail herein. The present disclosure further provides an impedance controller for controlling a robotic arm (or a robot). The controller includes a processor and a storage. The storage stores a computer program, and the processor is for executing the computer program to implement the impedance control method of embodiment 1 or the function of each module of the impedance control apparatus 10 of embodiment 2. The present disclosure further provides a robot including the above-mentioned impedance controller, which performs robotic arm torque control through the impedance controller, thereby realizing compliant control and the like. In one embodiment, the robot may be any robot such as a cutting robot, a welding robot, a grinding robot, and a massage robot. The present disclosure further provides a non-transitory computer-readable storage medium for storing the computer program used in the above-mentioned impedance controller. In the embodiments provided in the present disclosure, it should be understood that the disclosed device (apparatus) and method may also be implemented in other manners. The device embodiments described above are only schematic.
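To make the data flow of FIG. 6 and of modules 110 to 140 concrete, the following Python sketch shows one control cycle that produces the total joint torque τc=τcmd+τcompensation. It is an illustration under my own assumptions, not the patent's code: it reuses the helper functions from the earlier sketches, and the `sensors`, `plan`, `model`, and `gains` objects (and their attribute names such as `force_axis`) are hypothetical.

```python
import numpy as np

def control_cycle(sensors, plan, model, gains):
    """One cycle: S110 sensing, S120 impedance law, S130 compensation, S140 torque output."""
    # S110: joint state, external forces, and forward kinematics of the end-effector.
    M, C, G = model.M(sensors.q), model.C(sensors.q, sensors.qd), model.G(sensors.q)
    J, Jdot = model.J(sensors.q), model.Jdot(sensors.q, sensors.qd)
    J_pinv = np.linalg.pinv(J)

    # S120: correct the desired trajectory with the environment model along the
    # force-controlled direction; other directions keep the planned value.
    k_eq = equivalent_stiffness(plan.k_env, gains.k_arm)
    x_ref_m = plan.x_ref.copy()
    x_ref_m[plan.force_axis] = corrected_desired_position(plan.x_env, plan.f_ref, k_eq)

    # S121-S123: impedance control torque in the joint space.
    tau_cmd = impedance_control_torque(sensors.x, sensors.xd, x_ref_m,
                                       plan.xd_ref, plan.xdd_ref,
                                       sensors.f_ext, sensors.tau_ext,
                                       M, J, gains.Md, gains.Dd, gains.Kd)

    # S130: real-time compensation of the nonlinear dynamics terms (task-space variant).
    tau_comp = task_space_compensation(M, C, G, J_pinv, Jdot, sensors.qd)

    # S140: superimpose and send the total control torque to the joints.
    return tau_cmd + tau_comp
```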
For example, the flowcharts and schematic diagrams in the drawings show the possible architectures, functions, and operations according to the devices, methods, and computer program products of the embodiments of the present disclosure. In this regard, each block in the flowchart or block diagram may represent a module, program segment, or part of codes, and the module, program segment, or part of codes contains one or more executable instructions for realizing the specified logic functions. It should also be noted that, in alternative implementations, the functions marked in the blocks may also be executed in a different order from the order marked in the drawings. For example, two consecutive blocks can actually be executed basically in parallel, or they can sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that, each block in the schematic diagram and/or flowchart and the combination of the blocks in the schematic diagram and/or flowchart can be realized using a dedicated hardware-based system that executes specified functions or actions, or be realized using a combination of the dedicated hardware and computer instructions. In addition, the functional modules or units in each embodiment of the present disclosure may be integrated together to form an independent part, or each module or unit may exist alone, or two or more modules or units may be integrated to form an independent part. In the case that function(s) are implemented in the form of a software functional unit and sold or utilized as a separate product, they can be stored in a non-transitory computer readable storage medium. Based on this understanding, the technical solution of the present disclosure, either essentially or in part, contributes to the prior art, or a part of the technical solution can be embodied in the form of a software product. The software product is stored in a storage medium, which includes a plurality of instructions for enabling a computer device (which can be a smart phone, a personal computer, a server, a network device, or the like) to execute all or a part of the steps of the methods described in each of the embodiments of the present disclosure. The above-mentioned storage medium includes a variety of media capable of storing program codes, such as a USB disk, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, and an optical disk. The foregoing is only the specific embodiment of the present disclosure, while the scope of the present disclosure is not limited thereto. For those skilled in the art, changes or replacements that can be easily conceived within the technical scope of the present disclosure should be included within the scope of the present disclosure. | 29,651
11858142 | DESCRIPTION OF EMBODIMENT With reference to the accompanying drawings, a mode for implementing the present invention (exemplary embodiment) will be described in detail. The present invention is not intended to be limited by the content described in the following embodiment. Furthermore, constituent elements described in the following include elements easily achieved by a person skilled in the art or elements being substantially the same as the constituent elements. Moreover, the constituent elements described in the following can be combined as appropriate. FIG. 1 is a diagram schematically illustrating a configuration of a manipulation system according to the embodiment. A manipulation system 10 is a system to manipulate a sample, which is a minute object, under microscope observation. In FIG. 1, the manipulation system 10 includes a microscope unit 12, a first manipulator 14, a second manipulator 16, and a controller (control device) 43 that controls the manipulation system 10. The first manipulator 14 and the second manipulator 16 are separately arranged such that the first manipulator 14 is on one side of the microscope unit 12 and the second manipulator 16 is on the other side of the microscope unit 12. The microscope unit 12 includes a camera 18 including an imaging element, a microscope 20, and a sample stage 22. The sample stage 22 can support a sample holding member 11 such as a petri dish, and the microscope 20 is arranged directly above the sample holding member 11. The microscope unit 12 has a structure in which the microscope 20 and the camera 18 are integrated, and includes a light source (depiction omitted) that emits light toward the sample holding member 11. The camera 18 may be provided separately from the microscope 20. In the sample holding member 11, a solution containing a sample is accommodated. The solution is, for example, paraffin oil. When the sample of the sample holding member 11 is irradiated with light and light reflected by the sample of the sample holding member 11 is incident on the microscope 20, an optical image of the sample is magnified by the microscope 20 and thereafter captured by the camera 18. Based on the image captured by the camera 18, the sample can be observed. As illustrated in FIG. 1, the first manipulator 14 includes a first pipette holding member 24, an X-Y axis table 26, a Z-axis table 28, a drive device 30 that drives the X-Y axis table 26, and a drive device 32 that drives the Z-axis table 28. The first manipulator 14 is a manipulator having a triaxial configuration, i.e., an X-Y-Z axis configuration. In the present embodiment, a direction in the horizontal plane is referred to as an X-axis direction, a direction that intersects with the X-axis direction is referred to as a Y-axis direction, and a direction that intersects with each of the X-axis direction and the Y-axis direction (i.e., the vertical direction) is referred to as a Z-axis direction. The X-Y axis table 26 is movable in the X-axis direction or the Y-axis direction by drive of the drive device 30. The Z-axis table 28 is arranged to be movable up and down on the X-Y axis table 26 and is movable in the Z-axis direction by drive of the drive device 32. The drive devices 30 and 32 are connected to the controller 43. The first pipette holding member 24 is coupled to the Z-axis table 28, and a first pipette 25, which is a capillary tip, is attached at a front end.
The first pipette holding member24can move in accordance with the movement of the X-Y axis table26and the Z-axis table28in a three-dimensional space as a moving area, and can hold the sample accommodated in the sample holding member11via the first pipette25. That is, the first manipulator14is a holding manipulator used for holding a minute object, and the first pipette25is a holding pipette used as a holding means of the minute object. The second manipulator16illustrated inFIG.1includes a second pipette holding member34, an X-Y axis table36, a Z-axis table38, a drive device40that drives the X-Y axis table36, and a drive device42that drives the Z-axis table38. The second manipulator16is a manipulator having the triaxial configuration, i.e., the X-Y-Z axis configuration. The X-Y axis table36is movable in the X-axis direction or the Y-axis direction by drive of the drive device40. The Z-axis table38is arranged to be movable up and down on the X-Y axis table36and is movable in the Z-axis direction by drive of the drive device42. The drive devices40and42are connected to the controller43. The second pipette holding member34is coupled to the Z-axis table38, and a second pipette35made of glass is attached at the front end. The second pipette holding member34can move in accordance with the movement of the X-Y axis table36and the Z-axis table38in a three-dimensional space as a moving area, and can artificially manipulate the sample accommodated in the sample holding member11. That is, the second manipulator16is an operation manipulator used for the manipulation (such as injection operation of a DNA solution and piercing operation) of the minute object, and the second pipette35is an injection pipette used as an injection operation means of the minute object. The X-Y axis table36and the Z-axis table38are configured as a coarse-motion mechanism (three-dimensional moving table) that drives the second pipette holding member34to coarsely move to a manipulation position of the sample or the like that is accommodated in the sample holding member11. Further, a micro-motion mechanism44as a nano-positioner is provided at a coupling portion between the Z-axis table38and the second pipette holding member34. The micro-motion mechanism44is configured to support the second pipette holding member34movably in a longitudinal direction (axial direction) thereof and also to micro-drive the second pipette holding member34along the longitudinal direction (axial direction) thereof. FIG.2is a cross-sectional view illustrating one example of the micro-motion mechanism. As illustrated inFIG.2, the micro-motion mechanism44includes a piezoelectric actuator44athat drives the second pipette holding member34as a driving target. The piezoelectric actuator44aincludes a cylindrical housing87, roller bearings80and82provided inside the housing87, and a piezoelectric element92. The second pipette holding member34is inserted through the housing87in the axial direction. The roller bearings80and82rotatably support the second pipette holding member34. The piezoelectric element92expands and contracts along the longitudinal direction of the second pipette holding member34in accordance with a voltage applied thereto. The second pipette35(seeFIG.1) is attached and fixed to the second pipette holding member34on a front end side (left side inFIG.2). The second pipette holding member34is supported by the housing87via the roller bearings80and82. 
The roller bearing80includes an inner ring80a, an outer ring80b, and balls80cprovided between the inner ring80aand the outer ring80b. The roller bearing82includes an inner ring82a, an outer ring82b, and balls82cprovided between the inner ring82aand the outer ring82b. Each of the outer rings80band82bis fixed to the inner circumferential surface of the housing87, and each of the inner rings80aand82ais fixed to the outer circumferential surface of the second pipette holding member34via a hollow member84. In this manner, the roller bearings80and82rotatably support the second pipette holding member34. A flange portion84aprojecting outward in a radial direction is provided at a substantially central portion in the axial direction of the hollow member84. The roller bearing80is arranged on the front end side in the axial direction of the second pipette holding member34with respect to the flange portion84a, and the roller bearing82is arranged on a rear end side with respect to the flange portion84a. The inner ring80aof the roller bearing80and the inner ring82aof the roller bearing82are arranged so as to sandwich the flange portion84aserving as an inner ring spacer. The second pipette holding member34is threaded on the outer circumferential surface, and a locknut86and a locknut86are screwed to the second pipette holding member34from the front end side of the inner ring80aand the rear end side of the inner ring82a, respectively. Thus, the positions in the axial direction of the roller bearings80and82are fixed. An annular spacer90is arranged on the rear end side in the axial direction of the outer ring82bcoaxially with the roller bearings80and82. On the rear end side in the axial direction of the spacer90, the annular piezoelectric element92is arranged substantially coaxially with the spacer90. On the further rear end side in the axial direction, a lid88of the housing87is arranged. The lid88is for fixing the piezoelectric element92in the axial direction and has a hole portion through which the second pipette holding member34is inserted. The lid88may be fastened to the side surface of the housing87, for example, by bolts, which are not illustrated. The piezoelectric elements92, having a rod-like shape or a prismatic shape, may be arranged in a substantially regular interval in the circumferential direction of the spacer90, or the piezoelectric element92may have a square tube having a hole portion through which the second pipette holding member34is inserted. The piezoelectric element92is in contact with the roller bearing82via the spacer90. The piezoelectric element92is connected to the controller43as a control circuit via lead wires (not illustrated). The piezoelectric element92is configured to expand and contract along the axial direction of the second pipette holding member34in response to a voltage applied from the controller43, and finely move the second pipette holding member34along the axial direction thereof. When the second pipette holding member34finely moves along the axial direction, this fine movement is transmitted to the second pipette35(seeFIG.1) and the position of the second pipette35is finely adjusted. Further, when the second pipette holding member34vibrates in the axial direction by the piezoelectric element92, the second pipette35also vibrates in the axial direction. 
In this manner, the micro-motion mechanism44enables a more accurate operation in manipulating (injection operation of a DNA solution or a cell, piercing operation, and the like) a minute object, and improvement in a piercing action by the piezoelectric element92can be achieved. While it has been described that the above-described micro-motion mechanism44is provided on the second manipulator16for manipulating a minute object, a micro-motion mechanism54, which is the same as the micro-motion mechanism44, may be provided on the first manipulator14for fixing the minute object as illustrated inFIG.1, or it can be omitted. Subsequently, the control of the manipulation system10performed by the controller43will be described with reference toFIG.3.FIG.3is a control block diagram of the manipulation system. The controller43includes hardware resources such as a CPU (central processing unit) as an arithmetic means, and a hard disk, a RAM (random access memory), and a ROM (read-only memory) as a storage means. The controller43performs various calculations based on a predetermined program stored in a storage unit46B, and outputs drive signals in accordance with the calculation result so that a control unit46A performs various controls. The control unit46A controls a focusing mechanism81of the microscope unit12, the drive device30, the drive device32, and a suction pump29of the first manipulator14, the drive device40, the drive device42, the piezoelectric element92, and an injection pump39of the second manipulator16, and outputs respective drive signals via drivers and amplifiers provided as needed. The control unit46A supplies corresponding drive signals Vxyand Vz(seeFIG.1) to the drive devices30,32,40, and42. The drive devices30,32,40, and42performs drive in the X-Y-Z-axis directions based on the corresponding drive signals Vxyand Vz. The control unit46A may supply a nano-positioner control signal VN(seeFIG.1) to the micro-motion mechanism44to control the micro-motion mechanism44. The controller43is connected to a joystick47as an information input means and to an input unit49such as a keyboard, a mouse, and a touch panel. The controller43is further connected to a display unit45such as a liquid crystal panel. Microscope images acquired by the camera18and various control screens are displayed on the display unit45. When a touch panel is used as the input unit49, the touch panel may be used so as to overlap the display screen of the display unit45, and an operator may perform an input operation while checking the display image of the display unit45. A known joystick can be used for the joystick47. The joystick47includes a base and a handle portion standing erect from the base, and operating the handle portion to tilt can cause the drive devices30and40to perform X-Y drive and twisting the handle portion can cause the drive devices32and42to perform Z-drive. The joystick47may include a button47A for operating each drive of the suction pump29, the piezoelectric element92, and the injection pump39. As illustrated inFIG.3, the controller43further includes an image input unit43A, an image processor43B, an image output unit43C, and a position detector43D. An image signal Vpix (seeFIG.1) imaged by the camera through the microscope20is input to the image input unit43A. The image processor43B receives the image signal from the image input unit43A and performs image processing. The image output unit43C outputs image information subjected to the image processing by the image processor43B to the display unit45. 
The position detector 43D can detect, based on the image information after the image processing, the position of a cell and the like, which is a minute object imaged by the camera 18, and the position of a nucleus of the cell that is a manipulation target on which an injection operation by the second pipette 35 is performed. The position detector 43D can detect the presence of the cells and the like in the imaging area of the camera 18 based on the image information. Further, the position detector 43D may detect the positions of the first pipette 25 and the second pipette 35. The image input unit 43A, the image processor 43B, the image output unit 43C, and the position detector 43D are controlled by the control unit 46A. The control unit 46A controls, based on the positional information from the position detector 43D and the information on the presence of the cells and the like, the first manipulator 14 and the second manipulator 16. In the present embodiment, the control unit 46A automatically drives the first manipulator 14 and the second manipulator 16 in a predetermined sequence. Such sequence drive is performed by the control unit 46A sequentially outputting the corresponding drive signals based on the calculation result of the CPU by a predetermined program stored in the storage unit 46B in advance. Next, with reference to FIG. 4 and FIG. 5, a method of detecting a manipulation target of a sample that is a minute object and a method of determining a manipulation execution position IJ, an insertion start position D of the second pipette 35, and a push-in position P will be described. In the present embodiment, the samples are cells 100. The cells 100 are pre-nucleus fertilized ova. The manipulation to the cell 100 is an injection operation of a DNA solution. In the present embodiment, the insertion direction of the second pipette 35 to the cell 100 is parallel to the X-axis direction. An intersecting direction orthogonal to the insertion direction is parallel to the Y-axis direction. FIG. 4 and FIG. 5 are schematic diagrams illustrating one example of the cell of the manipulation target and the nucleus. The cells 100 are accommodated in the sample holding member 11. In the present embodiment, the sample holding member 11 includes an untreated cell region where untreated cells 100 are placed, a successful cell region where the cells 100 of successful manipulation are placed, and a failure cell region where the cells 100 of failed manipulation are placed. The cell 100 includes a cell membrane 102 and a nucleus 110. The cell membrane 102 is a biological membrane that separates the inside and the outside of the cell 100. The cell membrane 102 has fluidity and, by contact with the tip of the second pipette 35, deforms and hardens. The nucleus 110 is present inside the cell 100 covered with the cell membrane 102. The nucleus 110 includes a nuclear membrane 112 and a nucleolus 114. The nucleolus 114 is present inside the nucleus 110 covered with the nuclear membrane 112. The cell 100 is, in a state of being held by the first pipette 25, subjected to the injection operation by the second pipette 35. In the injection of a DNA solution and the like, the DNA solution and the like needs to be injected into the inside of the nuclear membrane 112. Because the nuclear membrane 112 is low in contrast and its shape is indefinite, detection by common image processing methods such as edge extraction processing is difficult.
Thus, the nucleolus114of higher contrast than the nuclear membrane112is detected, and based on the position of the nucleolus114, the manipulation execution position IJ of injection is determined. The image data of the cells100is imaged by the camera18illustrated inFIG.3. The image data of the cells100imaged by the camera18is sent to the image processor43B from the image input unit43A as an image signal. The image processor43B performs image processing on the image data of the cells100. The image processor43B performs binarization processing and filter processing on the image signal received from the image input unit43A, in order to detect the positions and shapes of the cells100and the nucleoli114. The image processor43B gray-scales the image signal and, based on a predetermined threshold value, converts the grayscale image into a monochrome image. Then, based on the monochrome image obtained by the binarization processing and the filter processing, the image processor43B performs edge extraction processing and pattern matching. The position detector43D can, based on the processing result thereof, detect the positions and shapes of the cells100and the nucleoli114. Specifically, the controller43detects the radius R of the cell100, the center C1of the cell100, the radius r of the nucleolus114, and the center C2of the nucleolus114based on the image data. The controller43moves, based on the detection result, the first pipette25, thereby moving the center C1of the cell100to a preset origin (0, 0). The controller43may define the coordinates of an X-Y plane of the center C1of the cell100as the origin (0, 0). The controller43calculates, based on the positions of the center C1of the cell100and the center C2of the nucleolus114in the detection result, the coordinates (x1, y1) of the center C2of the nucleolus114. In the present embodiment, x1=0. The manipulation execution position IJ of injection is a position of the tip of the second pipette35when the second pipette35performs an injection operation on the cell100. The coordinates (x2, y2) of the manipulation execution position IJ of injection are determined by the coordinates (x1, y1) of the center C2of the nucleolus114, the radius r of the nucleolus114, and an offset amount α. The offset amount α is an arbitrary setting value that is preset. The coordinates (x2, y2) of the manipulation execution position IJ are, with respect to the coordinates (x1, y1) of the center C2of the nucleolus114, a position moved by (r+α) in the Y-axis direction toward the center C1of the cell100. The X coordinate x2 of the manipulation execution position IJ is calculated by x2=x1. In the present embodiment, x2=0. The Y coordinate y2 of the manipulation execution position IJ, when y1≥0, is calculated by y2=y1−(r+α). The Y coordinate y2 of the manipulation execution position IJ, when y1<0, is calculated by y2=y1+(r+α). The insertion start position D of the second pipette35is a position where an insertion operation of the second pipette35to the cell100is started. The second pipette35is inserted toward the insertion start position D and in parallel with the X-axis. As illustrated inFIG.4, the coordinates (x3, y3) of the insertion start position D of the second pipette35are determined by the coordinates (x2, y2) of the manipulation execution position IJ and an initial distance L0. The initial distance L0is the distance in the X-axis direction between the center C1of the cell100and an initial position IP of the tip of the second pipette35. 
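The geometry of the manipulation execution position IJ can be expressed in a few lines. The following Python sketch is mine, not from the disclosure; the coordinate values are illustrative, and the cell centre C1 is taken as the origin, as in the text:

```python
def manipulation_execution_position(c2, r, alpha):
    """Return the injection target IJ = (x2, y2) from the nucleolus centre C2 = (x1, y1).

    IJ is offset from C2 by (r + alpha) along the Y axis toward the cell centre C1,
    which is the origin (0, 0); alpha is the preset offset amount.
    """
    x1, y1 = c2
    x2 = x1
    y2 = y1 - (r + alpha) if y1 >= 0 else y1 + (r + alpha)
    return x2, y2

# Illustrative values only: nucleolus centre at (0, 12) um, radius 4 um, alpha = 2 um.
print(manipulation_execution_position((0.0, 12.0), 4.0, 2.0))  # -> (0.0, 6.0)
```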
The initial distance L0is greater than the radius R of the cell100. The Y coordinate y3 of the insertion start position D is calculated by y3=y2. The X coordinate x3 of the insertion start position D is calculated by x3=−L0. The push-in position P of the second pipette35is a position of the tip of the second pipette35immediately before piercing the cell membrane102of the cell100with the tip of the second pipette35. The push-in position P is on the inner side of the cell membrane102in an initial shape of the cell100. The cell membrane102is pressed and deformed by the second pipette35until the tip of the second pipette35reaches the push-in position P from coming in contact with the cell membrane102. As illustrated inFIG.5, the coordinates (x4, y4) of the push-in position P of the second pipette35are determined by the coordinates (x2, y2) of the manipulation execution position IJ, the coordinates (x3, y3) of the insertion start position D, a predetermined push-in amount L, and a predetermined offset amount β. The push-in amount L is an arbitrary setting value that is preset. The offset amount β is a value calculated in accordance with the positions of the Y coordinates y2 and y3 of the manipulation execution position IJ and the insertion start position D, respectively. The coordinates (x4, y4) of the push-in position P is, with respect to the coordinates (x3, y3) of the insertion start position D, a position moved by (L+β) in the X-axis direction toward the manipulation execution position IJ. The Y coordinate y4 of the push-in position P is calculated by y4=y2=y3. The X coordinate x4 of the push-in position P is calculated by x4=−L0+(L+β). The offset amount β is calculated by β=y4*tan{sin {circumflex over ( )}−1(y4/R)}. Like a nucleus110A illustrated inFIG.5, when the coordinates of a manipulation execution position IJA coincide with the center C1of the cell100, the offset amount β is β=0. Thus, the coordinates (x5, y5) of a push-in position PA are calculated by x5=−L0+L and y5=0. Next, a driving method of the manipulation system10will be described. Before starting the operation of the manipulation system10, an operator first arranges the first pipette25and the second pipette35within the field of view of the camera18illustrated inFIG.1. In this case, the height of the tip of the first pipette25is set to a position slightly above the bottom surface of the sample holding member11. The operator then, by using the focusing mechanism81of the microscope20, sets a focus on the first pipette25. The operator adjusts, in a state where the focus is set on the first pipette25, the height of the second pipette35so as to be focused on. The operator then moves the sample stage22so that the periphery of the cell100in the sample holding member11overlaps with the field of view of the camera18. The operator further confirms that the cell100does not move even if the tip of the first pipette25is brought close to the cell100. This is to confirm that the suction pump29illustrated inFIG.3is in an equilibrium state. With the foregoing preparation, the cell100is placed in the vicinity of the first pipette25and the second pipette35. FIG.6is a flowchart illustrating one example of the operation of the manipulation system of the embodiment.FIG.7is an explanatory diagram explaining the operation of the manipulation system of the embodiment. 
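Continuing the same sketch, the insertion start position D and the push-in position P follow directly from the formulas above. This is again my own illustration with assumed numbers; the offset β uses the stated relation β=y4*tan(sin^-1(y4/R)) and vanishes when the insertion line passes through the cell centre:

```python
import math

def insertion_start_position(ij, l0):
    # D = (x3, y3): same Y as IJ, at the initial X distance -L0 from the cell centre.
    _, y2 = ij
    return -l0, y2

def push_in_position(ij, l0, push_in_amount, cell_radius):
    # P = (x4, y4): advance from D by (L + beta) along the X axis toward IJ, where beta
    # corrects for the insertion line being off-centre with respect to the cell of radius R.
    _, y2 = ij
    y4 = y2
    beta = 0.0 if y4 == 0 else y4 * math.tan(math.asin(y4 / cell_radius))
    return -l0 + (push_in_amount + beta), y4

# Illustrative values only: IJ at (0, 6) um, L0 = 80 um, L = 25 um, R = 50 um.
ij = (0.0, 6.0)
print(insertion_start_position(ij, 80.0))      # -> (-80.0, 6.0)
print(push_in_position(ij, 80.0, 25.0, 50.0))  # x4 is about -54.3
```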
The manipulation system 10 performs the manipulation on each of a plurality of cells 100 placed in the sample holding member 11 one at a time, and repeats the manipulation for the multiple cells 100. The controller 43 performs the manipulation on the multiple cells 100 automatically. The automatic manipulation by the manipulation system 10 is started by pressing a start button on PC software, for example. First, at Step ST10, the operator sets, to the control unit 46A of the controller 43 via the input unit 49 illustrated in FIG. 3, a manipulation termination number Ne, which is the number of manipulations after which the manipulation system 10 ends the operation. Because the manipulation is performed on each of the cells 100 individually, the manipulation termination number Ne is the number of cells 100 on which the manipulation is to be performed. When the manipulation termination number Ne is input to the control unit 46A, at Step ST12, the control unit 46A sets the manipulation execution number N, which is a counter of the number of manipulations that have been performed, to N=0, and stores it in the storage unit 46B of the controller 43. Then, the image processor 43B of the controller 43 performs image processing on the image data imaged by the camera 18 through the microscope 20. The position detector 43D of the controller 43 detects, by image processing, the position coordinates of the tip center of the first pipette 25 on the screen of the camera 18 and the position coordinates of the tip center of the second pipette 35. At Step ST14, the control unit 46A moves, by driving the first manipulator 14, the first pipette 25 to a predetermined position on the basis of the detection result. The predetermined position is a position where the tip center of the first pipette 25 faces the cell 100 of the manipulation target. Subsequently, at Step ST16, the control unit 46A drives the suction pump 29 of the first manipulator 14 to perform suction through the first pipette 25. When the suction pump 29 is driven, the pressure in the first pipette 25 becomes negative, and a flow of the solution of the sample holding member 11 arises toward the opening of the first pipette 25. The cell 100 is sucked together with the solution, and is adsorbed to the tip of the first pipette 25 and held. In this case, whether the cell 100 is being held may be confirmed by detecting, by image processing, whether the cell 100 is located in the vicinity of the tip of the first pipette 25. Then, at Step ST18, the image processor 43B acquires image data of the cell 100. At Step ST20, the position detector 43D detects, based on the acquired image data, the positions and shapes of the cell 100 and the nucleolus 114 by an image processing sequence. At Step ST22, the position detector 43D determines whether the nucleolus 114 has been detected. At Step ST22, when determined that the nucleolus 114 has not been detected (No at Step ST22), the process returns to Step ST20 and the position detector 43D detects the positions and shapes of the cell 100 and the nucleolus 114 again by the image processing sequence. At Step ST22, when determined that the nucleolus 114 has not been detected, the process may instead return to Step ST18 and the image processor 43B may acquire the image data of the cell 100 again. At Step ST18, before acquiring the image data again, the control unit 46A may change the posture of the cell 100 by temporarily releasing the holding of the cell 100 by the first pipette 25.
At Step ST22, when determined that the nucleolus114has been detected (Yes at Step ST22), the control unit46A calculates the coordinates (x1, y1) of the center C2of the nucleolus114. In the present embodiment, x1=0. At Step ST24, the control unit46A determines whether the Y coordinate y1 of the center C2of the nucleolus114is greater than or equal to the Y coordinate 0 of the center C1of the cell100. At Step ST24, when determined that the Y coordinate y1 of the center C2of the nucleolus114is greater than or equal to the Y coordinate 0 of the center C1of the cell100, at Step ST26, the Y coordinate y2 of the manipulation execution position IJ is calculated by y2=y1−(r+α). At Step ST24, when determined that the Y coordinate y1 of the center C2of the nucleolus114is smaller than the Y coordinate 0 of the center C1of the cell100, at Step ST28, the Y coordinate y2 of the manipulation execution position IJ is calculated by y2=y1+(r+α). After calculating the Y coordinate y2 of the manipulation execution position IJ at Step ST26and Step ST28, at Step ST30, the control unit46A determines the manipulation execution position IJ, the insertion start position D, and the push-in position P. Specifically, the control unit46A calculates the coordinates (x2, y2) of the manipulation execution position IJ, the coordinates (x3, y3) of the insertion start position D, and the coordinates (x4, y4) of the push-in position P and sets the moving path of the second pipette35. Then, at Step ST32, the control unit46A moves the tip of the second pipette35to the insertion start position D. Because the initial position IP of the second pipette35and the insertion start position D have the same X coordinate, the second pipette35is translated in the Y-axis direction. The tip of the second pipette35faces the manipulation execution position IJ. Then, at Step ST34, the control unit46A moves the tip of the second pipette35from the insertion start position D to the push-in position P at a first acceleration Vlow. The first acceleration Vlowis low acceleration or ultra-low acceleration. The control unit46A may move the tip of the second pipette35at a constant speed from the insertion start position D to the push-in position P. Because the insertion start position D and the push-in position P have the same Y coordinate, the second pipette35is translated in the X-axis direction. Until the tip of the second pipette35reaches the push-in position P from coming in contact with the cell membrane102, the cell membrane102is not pierced by the second pipette35but is pressed and deformed. As a result, the cell membrane102is given a tensile force. The cell membrane102, when sufficiently pressed, is in a hardened state due to the tensile force. The second pipette35may, after Step ST34, before Step ST36, wait at the push-in position P for a predetermined time. Then, at Step ST36, the control unit46A moves the tip of the second pipette35from the push-in position P to the manipulation execution position IJ at a second acceleration Vhigh. The second acceleration Vhighis at least greater than the first acceleration Vlow. The second acceleration Vhighis high acceleration. Because the push-in position P and the manipulation execution position IJ have the same Y coordinate, the second pipette35is translated in the X-axis direction. Because the cell membrane102is in a hardened state at Step ST34, the cell membrane102results, by moving the second pipette35at high speed, in local destruction due to an impact load and is pierced. 
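Steps ST32 to ST36, the two-speed insertion that first pre-stresses and then pierces the cell membrane, could be driven as follows. This is a sketch against a hypothetical manipulator driver exposing a `move_to(position, acceleration=...)` method; the dwell time is an assumption:

```python
import time

def insert_pipette(manipulator, d, p, ij, v_low, v_high, dwell_s=0.2):
    """ST32-ST36: two-stage insertion of the second pipette (hypothetical driver API).

    d, p, ij : insertion start, push-in, and manipulation execution positions (x, y)
    v_low    : low or ultra-low acceleration used while pressing and hardening the membrane
    v_high   : higher acceleration used to pierce the pre-stressed membrane by impact
    """
    manipulator.move_to(d)                         # ST32: approach along the Y axis to the start point
    manipulator.move_to(p, acceleration=v_low)     # ST34: slow push; the membrane deforms and hardens
    time.sleep(dwell_s)                            # optional wait at the push-in position P
    manipulator.move_to(ij, acceleration=v_high)   # ST36: fast move; the membrane is locally pierced
```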
The tip of the second pipette 35 is inserted into the cell membrane 102. The tip of the second pipette 35 is inserted into the nuclear membrane 112. Then, at Step ST38, the control unit 46A drives the injection pump 39 of the second manipulator 16 and performs the injection operation of a DNA solution and the like on the cell 100. The control unit 46A may perform the injection operation by driving the injection pump 39 for a preset time, for example. The image processor 43B may determine, by performing image processing during the injection operation and detecting a bulge of the nuclear membrane 112, whether the injection of the DNA solution and the like has been completed. After performing the injection operation, at Step ST40, the control unit 46A moves the second pipette 35 to the initial position IP. Specifically, the second pipette 35 is pulled out from the cell 100 by moving in the X-axis direction, and thereafter returns to the initial position IP by moving in the Y-axis direction. Subsequently, at Step ST42, the control unit 46A determines whether the injection operation was successful. At Step ST42, when determined that the injection operation was successful (Yes at Step ST42), the control unit 46A drives the sample stage 22 and moves the cell 100 after the injection operation to the successful cell region. The control unit 46A stops the suction pump 29 of the first manipulator 14 and stops the suction of the first pipette 25. This causes the pressure in the first pipette 25 to be positive, and the first pipette 25 releases the holding of the cell 100. The cell 100 is placed in the successful cell region. The control unit 46A drives the sample stage 22 again and moves the tip of the first pipette 25 to the vicinity of the untreated cell region where the untreated cells 100 are placed. At Step ST42, when determined that the injection operation has failed (No at Step ST42), the control unit 46A drives the sample stage 22 and moves the cell 100 after the injection operation to the failure cell region. The control unit 46A stops the suction pump 29 of the first manipulator 14 and stops the suction of the first pipette 25. This causes the pressure in the first pipette 25 to be positive, and the first pipette 25 releases the holding of the cell 100. The cell 100 is placed in the failure cell region. The control unit 46A drives the sample stage 22 again and moves the tip of the first pipette 25 to the vicinity of the untreated cell region where the untreated cells 100 are placed. After Step ST44 and Step ST46, at Step ST48, the control unit 46A increments the value of the counter of the manipulation execution number N by one and stores it as N=N+1 in the storage unit 46B of the controller 43. At Step ST50, the control unit 46A determines whether the manipulation execution number N has reached the manipulation termination number Ne. At Step ST50, when determined that the manipulation execution number N is smaller than the manipulation termination number Ne (No at Step ST50), the process returns to Step ST14 and repeatedly performs the holding operation for another cell 100, the detection operation of the cell 100 and the nucleolus 114, the injection operation into the nuclear membrane 112, and the placement operation of the cell 100. At Step ST50, when the manipulation execution number N is greater than or equal to the manipulation termination number Ne (Yes at Step ST50), the manipulations for the predetermined number of cells 100 are finished and the series of operations is terminated.
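The overall automatic sequence of FIG. 6 (Steps ST12 through ST50) can be summarized as an outer loop. The sketch below is my own pseudocode-style Python against a hypothetical `system` facade, reusing the position helpers and `insert_pipette` from the earlier sketches; it is only meant to show the control flow, not an actual device API:

```python
def run_automatic_manipulation(system, ne):
    """Outer loop of the automatic manipulation (ST12-ST50) for ne cells."""
    n = 0                                        # ST12: manipulation execution number
    while n < ne:                                # ST50: stop once Ne manipulations are done
        system.hold_next_cell()                  # ST14-ST16: position the first pipette and aspirate a cell
        cell, nucleolus = system.detect()        # ST18-ST22: image acquisition and nucleolus detection (with retries)
        ij = manipulation_execution_position(nucleolus.center, nucleolus.radius, system.alpha)  # ST24-ST30
        d = insertion_start_position(ij, system.l0)
        p = push_in_position(ij, system.l0, system.push_in_amount, cell.radius)
        insert_pipette(system.injector, d, p, ij, system.v_low, system.v_high)  # ST32-ST36
        ok = system.inject()                     # ST38: drive the injection pump
        system.retract_and_sort(success=ok)      # ST40-ST46: retract, release, and sort the cell
        n += 1                                   # ST48: increment the counter
```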
Because the captured image of the camera18is an image captured by imaging the X-Y plane at the focus position, the positions in the Z-direction between the tip of the second pipette35and the nucleolus114may not overlap. In this case, as the tip of the second pipette35is not inserted into the vicinity of the nucleolus114, a failure of the injection is assumed. In the present embodiment, when the injection was failed (No at Step St42), the cell100being held is moved to the failure cell region and the operation to the failed cell100is aborted. However, for example, the failed cell100may be moved to the untreated cell region and the process may return to Step ST14. For example, at Step ST16, the same cell100may be held by changing the posture or a different cell100may be held. As for the cell100for which the injection was failed, the operator may determine it, or the control unit46A may determine it based on a predetermined condition. As described above, the manipulation system10of the present embodiment includes the sample stage22configured such that a minute object is placed thereon, the first manipulator14including the first pipette25for holding the minute object, the second manipulator16including the second pipette35for operating the minute object that is held on the first pipette25, the microscope unit12(imaging unit) for imaging the minute object, and the controller43(control device) that controls the sample stage22, the first pipette25, the second pipette35, and the microscope unit12, and the controller43moves the tip of the second pipette35from the certain insertion start position D of the minute object to the certain push-in position P at a constant speed or the first acceleration Vlow, and after a predetermined time, moves the tip of the second pipette35from the push-in position P to the certain manipulation execution position IJ at the second acceleration Vhighgreater than the first acceleration Vlow. Accordingly, because the minute object is pressed at low speed until the tip of the second pipette35reaches the push-in position P from coming in contact with the minute object, the minute object is not pierced by the second pipette35but deformed. The sufficiently deformed minute object, by being pressed at high speed, results in local destruction due to an impact load and is pierced. According to such a manipulation system10, the piercing can be performed easily without damaging other tissues of the minute object. As a result, regardless of the degree of skill and technique of the operator, manipulation can be performed efficiently and suitably while suppressing damage to the minute object at the time of manipulation. In the manipulation system10of the present embodiment, the controller43determines the manipulation execution position IJ on the basis of the image data of the microscope unit12. According to such a manipulation system10, because the manipulation execution position IJ is determined based on the captured image data, the manipulation can be performed on the minute object efficiently and suitably, regardless of the degree of skill and technique of the operator. In the manipulation system10of the present embodiment, the controller43determines the insertion start position D on the basis of the image data of the microscope unit12and the manipulation execution position IJ. 
According to such a manipulation system10, because the insertion start position D is determined based on the captured image data and the manipulation execution position IJ that is determined based on the image data, the minute object can be pierced efficiently and suitably, regardless of the degree of skill and technique of the operator. In the manipulation system10of the present embodiment, the controller43determines the push-in position P on the basis of the image data of the microscope unit12and the manipulation execution position IJ. According to such a manipulation system10, because the push-in position P is determined based on the captured image data and the manipulation execution position IJ that is determined based on the image data, the minute object can be pierced efficiently and suitably, regardless of the degree of skill and technique of the operator. In the manipulation system10of the present embodiment, the minute object is the cell100. Accordingly, because the cell membrane102is pressed at low speed until the tip of the second pipette35reaches the push-in position P from coming in contact with the cell membrane102of the cell100, the cell membrane102is not pierced by the second pipette35but deformed. The cell membrane102, when sufficiently pressed, is in a hardened state due to the tensile force. The cell membrane102in a hardened state, by being pressed at high speed, results in local destruction due to an impact load and is pierced. According to such a manipulation system10, the cell membrane102can be pierced easily without damaging other tissues in the cell100. As a result, regardless of the degree of skill and technique of the operator, manipulation can be performed efficiently and suitably while suppressing damage to the cell100at the time of manipulation. In the manipulation system10of the present embodiment, the controller43detects the position of the nucleolus114of the cell100on the basis of the image data of the microscope unit12. According to such a manipulation system10, because the position of the nucleolus114of the cell100is detected based on the captured image data, the nucleolus114can be detected efficiently and suitably, regardless of the degree of skill and technique of the operator. In the manipulation system10of the present embodiment, the controller43determines the manipulation execution position IJ on the basis of the position of the nucleolus114. According to such a manipulation system10, because the manipulation execution position IJ is determined based on the position of the nucleolus114detected from the captured image data, the manipulation can be performed on the minute object efficiently and suitably, regardless of the degree of skill and technique of the operator. In the manipulation system10of the present embodiment, the manipulation execution position IJ is outside the nucleolus114. According to such a manipulation system10, by setting the manipulation execution position IJ outside the nucleolus114, injection operation can be performed without coming in contact with the nucleolus114with the tip of the second pipette35when inserting the second pipette35. Thus, in injection operation, damage to the cell100can be suppressed. In the manipulation system10of the present embodiment, the manipulation execution position IJ is a position offset from the center C2of the nucleolus. According to such a manipulation system10, the tip of the second pipette35can be prevented from damaging the nucleolus114when inserting the second pipette35. 
Thus, in injection operation, damage to the cell100can be suppressed. In the manipulation system10of the present embodiment, the distance in the intersecting direction orthogonal to the insertion direction of the second pipette35between the center C2of the nucleolus114and the manipulation execution position IJ is greater than the radius r of the nucleolus114. According to such a manipulation system10, because the manipulation execution position IJ can be set outside the nucleolus114, injection operation can be performed without coming in contact with the nucleolus114with the tip of the second pipette35when inserting the second pipette35. Thus, in injection operation, damage to the cell100can be suppressed. A driving method of the manipulation system10of the present embodiment is a drive method of the manipulation system10including the sample stage22configured such that the cell100is placed thereon, the first manipulator14including the first pipette25for holding the cell100, and the second manipulator16including the second pipette35for operating the cell100that is held on the first pipette25, and includes Step ST22of moving the tip of the second pipette35to the certain insertion start position D of the cell100, Step ST24of moving the tip of the second pipette35from the insertion start position D to the certain push-in position P at a constant speed or the first acceleration Vlow, and after a predetermined time, Step ST26of moving the tip of the second pipette35from the push-in position P to the certain manipulation execution position IJ at the second acceleration Vhighgreater than the first acceleration Vlow. Accordingly, because the minute object is pressed at low speed until the tip of the second pipette35reaches the push-in position P from coming in contact with the minute object, the minute object is not pierced by the second pipette35but deformed. The sufficiently deformed minute object, by being pressed at high speed, results in local destruction due to an impact load and is pierced. According to such a manipulation system10, the piercing can be performed easily without damaging other tissues of the minute object. As a result, regardless of the degree of skill and technique of the operator, manipulation can be performed efficiently and suitably while suppressing damage to the minute object at the time of manipulation. The manipulation system10and the driving method of the manipulation system10of the present embodiment may be modified as appropriate. For example, it is preferable that the shapes and the like of the first pipette25, the second pipette35, and the like be changed as appropriate, depending on the type of minute object and the operation to the minute object. In the respective operations of the holding operation of a minute object, the detection operation of the certain manipulation target position, the injection operation, and the placement operation of the minute object, as appropriate, a part of the procedure may be omitted, or the procedure may be replaced and executed. 
REFERENCE SIGNS LIST

10 MANIPULATION SYSTEM
11 SAMPLE HOLDING MEMBER
12 MICROSCOPE UNIT
14 FIRST MANIPULATOR
16 SECOND MANIPULATOR
18 CAMERA
20 MICROSCOPE
22 SAMPLE STAGE
24 FIRST PIPETTE HOLDING MEMBER
25 FIRST PIPETTE
26 X-Y AXIS TABLE
28 Z-AXIS TABLE
29 SUCTION PUMP
30, 32 DRIVE DEVICE
34 SECOND PIPETTE HOLDING MEMBER
35 SECOND PIPETTE
36 X-Y AXIS TABLE
38 Z-AXIS TABLE
39 INJECTION PUMP
40, 42 DRIVE DEVICE
43 CONTROLLER (CONTROL DEVICE)
43A IMAGE INPUT UNIT
43B IMAGE PROCESSOR
43C IMAGE OUTPUT UNIT
43D POSITION DETECTOR
44, 54 MICRO-MOTION MECHANISM
44a PIEZOELECTRIC ACTUATOR
45 DISPLAY UNIT
46A CONTROL UNIT
46B STORAGE UNIT
47 JOYSTICK
47A BUTTON
49 INPUT UNIT
80, 82 ROLLER BEARING
80a, 82a INNER RING
80b, 82b OUTER RING
80c, 82c BALL
81 FOCUSING MECHANISM
84 HOLLOW MEMBER
84a FLANGE PORTION
86 LOCKNUT
87 HOUSING
88 LID
90 SPACER
92 PIEZOELECTRIC ELEMENT
100 CELL
102 CELL MEMBRANE
110 NUCLEUS
112 NUCLEAR MEMBRANE
114 NUCLEOLUS
C1, C2 CENTER
R, r RADIUS
IP INITIAL POSITION
D INSERTION START POSITION
P PUSH-IN POSITION
IJ MANIPULATION EXECUTION POSITION
L0 INITIAL DISTANCE
L PUSH-IN AMOUNT
α, β OFFSET AMOUNT
11858143

While implementations are described herein by way of example, those skilled in the art will recognize that the implementations are not limited to the examples or figures described. It should be understood that the figures and detailed description thereto are not intended to limit implementations to the particular form disclosed but, on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope as defined by the appended claims. The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words “include”, “including”, and “includes” mean “including, but not limited to”.

DETAILED DESCRIPTION

An autonomous mobile device (AMD) such as a robot is capable of autonomous movement, allowing it to move from one location in a physical space to another without being “driven” or remotely controlled by a user. The AMD may perform tasks that involve moving within the physical space, interacting with users, and so forth. For example, the AMD may be commanded to follow a particular user as that user moves from one room to another. Several tasks, such as following a particular user, involve the identification of a particular user. This identification may be relative, in that the AMD is able to distinguish one person from another.

The interaction between the user, the AMD, and the physical space and obstacles within it is dynamic and complex. For example, the user, the AMD, other users, pets, and so forth may be moving within the physical space. The AMD's view of the user may be obstructed due to obstacles in the physical space. The relative orientation of the user with respect to the AMD may change, presenting different views of the user. The pose of the user may change as they sit, stand, recline, and so forth. As a result, the process of accurately identifying a user is complex. The process of identifying a particular user may be used in conjunction with other activities, such as moving the AMD in the physical space or responding to a command. This interaction with the real-world means that a low latency determination of the identity of the user is important. For example, when commanded to follow a particular user, the AMD should be able to follow that user without hesitation and without having to stop and determine where the user is in a crowd of people.

The resources of an AMD, such as availability of memory, processor, and power are constrained. Due to the complexity of the problem and the limited resources of the AMD, such identification and activities relying on that identification impose significant operational limitations. For example, due to relatively high latencies in identifying the user while following, the speed of the AMD may be limited to a slow crawl. However, this significantly limits the usefulness of the AMD. Described in this disclosure are systems and techniques for quickly and efficiently identifying a user based on sensor data obtained by the AMD. This identification may be relative, in that the AMD is able to distinguish one person from another. In some implementations the identification may be definitive, associating a particular person with a particular user account.
One or more images of a scene are obtained by one or more cameras of the AMD. For example, a pair of cameras may be operated to provide stereovision and stereopairs of images (a left image and a right image) may be acquired. The images are processed by a user detection module to generate cropped images of people depicted in the image. A predicted location is calculated that indicates where each person is expected to be at a subsequent time. A comparison between the predicted location and a location in physical space at the subsequent time is used to calculate a proximity value. The proximity value indicates how closely the predicted location and the location in physical space agree. For example, if the predicted location for Person1 is within 10 centimeters (cm) of the location in physical space for Person1, the proximity value may be 0.95. The cropped images are processed to determine a feature vector (FV) that is representative of one or more features of the person depicted in the cropped image. In one implementation, the FV may be generated by processing the cropped image with an artificial neural network. This artificial neural network (ANN) may comprise a backbone of expanded convolutional modules with attention modules incorporated therein. For example, the ANN may combine a backbone of a neural network such as MobileNet v2 (Sandler, Mark, “MobileNetV2: Inverted Residuals and Linear Bottlenecks”, arxiv.org/pdf/1801.04381.pdf) with attention modules such as a HACNN inserted into that backbone (Li, Wei, “Harmonious Attention Network for Person Re-Identification”, al arxiv.org/pdf/1802.08122.pdf). This arrangement reduces computing requirements for processing and also decreases latency, resulting in FVs being generated more quickly than other methods. In comparison, use of the HACNN alone is precluded due to limited computational resources on the AMD104and long latency due to complex local branches. As images are acquired and processed, the resulting FVs are assessed for similarity to FVs included in gallery data. For example, a similarity value indicates how similar a FV is with respect to another FV in the gallery data. The gallery data comprises FVs associated with particular users. The gallery data for a particular user may be arranged into a homogenous set of FVs and a diverse set of FVs. The homogenous set of FVs are FVs of that user that have similarity values with respect to one another that are within a first range. For example, the homogenous set of FVs may be associated with images of the user from slightly different views, such as while the user is turning around. The homogenous set may omit storing FVs that are associated with less than a threshold similarity value. This reduces the amount of memory used by not storing FVs that are considered redundant. In one implementation the first range may extend from a similarity value of 0.8 to 0.9. The diverse set of FVs are FVs of the user that have similarity values with respect to one another that are within a second range. For example, the diverse set of FVs may be associated with images of the user from substantially different views, such as front, left, rear, right, seated, standing, and so forth. The diverse set may omit storing FVs that are associated with less than a threshold similarity value. This reduces the amount of memory used by not storing FVs that are considered redundant. In one implementation the second range may extend from a similarity value of 0.0 to 0.8. 
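As a rough illustration of the predicted location and proximity value introduced above, the comparison may be summarized with a constant-velocity extrapolation and a distance-to-value mapping. The sketch below is illustrative only; the sampling interval, the exponential mapping, and the scale constant are assumptions, and a Kalman filter or similar estimator may be used in place of the simple extrapolation.

    import numpy as np

    def predicted_location(locations, dt, horizon):
        """Extrapolate a predicted location from recent locations in physical space.

        locations : list of recent (x, y) positions in meters, oldest first, sampled every dt seconds
        horizon   : how many seconds ahead to predict
        """
        p0, p1 = np.asarray(locations[-2]), np.asarray(locations[-1])
        velocity = (p1 - p0) / dt                      # constant-velocity assumption
        return p1 + velocity * horizon

    def proximity_value(predicted, observed, scale=0.5):
        """Map the distance between predicted and observed locations to a value in (0, 1]."""
        distance = float(np.linalg.norm(np.asarray(predicted) - np.asarray(observed)))
        return float(np.exp(-distance / scale))        # 1.0 when the two locations coincide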
During operation, a FV of an unidentified person depicted in a cropped image is determined. The FV is then compared to the FVs in the gallery data to determine a similarity value. By using gallery data that includes the homogenous set and the diverse set of FVs, the likelihood of an accurate identification is improved. Because the homogenous set provides FVs that are relatively similar to one another, the likelihood of a high similarity value for an image of the same person is increased. Likewise, because the diverse set provides examples of FVs of the same user but in a variety of different appearances, a substantial change in the appearance of the user such as a change in pose or change in relative orientation with respect to the camera is still highly likely to result in a similarity value greater than a threshold value. For example, if the user transitions from walking to seated, a FV of an image of the user seated will have a high similarity value to a previously stored FV associated with the user being seated. As additional images and resulting FVs are obtained, older entries in the gallery data may be discarded and newer FVs stored. This assures that the gallery data contains current data to facilitate accurate identification of the user.

Identification data that associates a user identifier with a particular person depicted in an image may be determined based on the proximity value and the similarity value. By using both the proximity value and the similarity value, overall accuracy is improved. As described earlier, the proximity value indicates how close the location in physical space of a first person is to a predicted location of a first user. If the first person is very close to the predicted location, it may be considered likely that the first person is the first user. If the similarity value of the FV of the image of the first person as compared to a previously stored FV in the gallery data exceeds a threshold value, it may also be considered likely that the first person is the first user. By combining both the proximity and the similarity, the overall likelihood of accurate identification is improved, as illustrated by the sketch following this overview. Further reductions in latency and the computing resources used to determine the identification data are also possible. In one implementation, the proximity value may be used to selectively search the gallery data. For example, the gallery data for the user having a predicted location closest to the unidentified person may be processed first to determine similarity values.

By using the techniques described, the AMD is able to quickly and accurately identify a user. This allows the AMD to perform tasks that use identification, such as finding or following a particular user in the physical space. As a result of these techniques, the operation of the AMD within the physical space is significantly improved.

Illustrative System

FIG.1illustrates a system100in which a physical space102includes an autonomous mobile device (AMD)104, according to some implementations. The AMD104may include sensors, such as one or more cameras106. The physical space102may include one or more users108. One or more obstacles110may be present within the physical space102. For example, obstacles110may comprise walls, furnishings, stairwells, people, pets, and so forth. These obstacles110may occlude a view, by the sensors on the AMD104, of the user108. For example, the user108(2) may move through a doorway and a wall may prevent the camera106from seeing the user108in the other room.
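A minimal sketch of the combination described above follows. It assumes that, for each known user, a proximity value and a best similarity value (both normalized to the range 0 to 1) have already been computed; the equal weighting and the minimum combined score are illustrative assumptions rather than values given in the text.

    def identify(proximity, similarity, min_score=1.0):
        """Return the user identifier with the best combined score, or None if no candidate qualifies.

        proximity  : dict mapping user identifier -> proximity value (closeness of the person to
                     that user's predicted location)
        similarity : dict mapping user identifier -> best similarity value against that user's
                     gallery data
        """
        best_user, best_score = None, min_score
        for user_id in proximity.keys() & similarity.keys():
            score = proximity[user_id] + similarity[user_id]   # normalized values summed
            if score > best_score:
                best_user, best_score = user_id, score
        return best_user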
The AMD104may use an identification module120to determine identification data. For example, the identification data may associate a particular user identifier with a particular person depicted in an image acquired by the camera106. In one implementation, the identification data may be relative. For example, the identification data may distinguish one person from another. In another implementation, the identification data may be absolute, such as identifying user108(2) as “John Smith.” The identification module120accepts as input sensor data from one or more sensors on or associated with the AMD104. For example, the camera(s)106on the AMD104may generate image data122. This image data122may comprise single images, stereopairs of images, and so forth. A user detection module124processes the image data122and generates cropped images126. For example, the user detection module124may use a first artificial neural network to determine portions of the image data122that are likely to include a person. In one implementation, the cropped image(s)126may comprise the image data122with data indicative of a bounding box designating a portion of the image data122that is deemed likely to depict a person. In another implementation, the cropped image(s)126may comprise a subset of the data from the image data122. In some implementations the cropped image(s)126may be resized and rescaled to predetermined values. In some implementations, an internal identifier may be associated with the person in the cropped image126. A motion prediction module128determines a location in physical space130and motion prediction data134indicative of a predicted location132of a person at a given time. In one implementation, the motion prediction module128may determine the location in physical space130of the person using the image data122. For example, a pair of cameras106may provide stereovision. Based on the disparity in apparent position of the user in a left image acquired from a left camera106(L) and a right image acquired from a right camera106(R), a location in the physical space102of the person may be determined relative to the cameras106. A set of locations in physical space130may be determined during some interval of time. Based on the set of locations in physical space130, a direction and velocity of movement of the person may be determined. For example, a Kalman filter may use the set of locations in physical space130to determine the predicted location132. In other implementations other techniques may be used to determine the motion prediction data134. For example, an ultrasonic sensor, lidar, depth camera, or other device may provide information about the location in physical space130and apparent motion of the person at a given time interval. This information may be used to determine the predicted location132. The motion prediction data134may be used to determine one or more proximity values136. A proximity value136may indicate how closely a predicted location132of a user108at a specified time corresponds to a location in physical space130of a person at the specified time. For example, a proximity value136may indicate that the location in physical space130of Person1 at time=2 is within 5 cm of the predicted location132of the user108(3). The proximity value136may be expressed as a distance, ratio, and so forth. The proximity value136may be specific to a particular combination of unidentified person and user108. 
For example, a first proximity value136may describe how closely the location in physical space130of Person1 at time=2 is to the predicted location132of the user108(3), while a second proximity value136may describe how closely the location in physical space130of Person1 at time=2 is to the predicted location132of the user108(2). A feature vector module150accepts as input a cropped image126and produces as output a feature vector (FV)152. The feature vectors152are representative of features in the cropped image126. The feature vector module150may comprise a second artificial neural network (ANN) that has been trained to generate feature vectors152. The second ANN may comprise a backbone of expanded convolutional modules with attention modules incorporated therein. For example, the ANN may combine a backbone of a neural network such as MobileNet v2 (Sandler, Mark, “MobileNetV2: Inverted Residuals and Linear Bottlenecks”, arxiv.org/pdf/1801.04381.pdf) with attention modules such as a HACNN (Li, Wei, “Harmonious Attention Network for Person Re-Identification”, al arxiv.org/pdf/1802.08122.pdf). The second ANN is discussed in more detail with regard toFIGS.3and4. A similarity module154determines a similarity value156that is indicative of similarity between two or more FVs152. In one implementation, the similarity value156may be calculated as a Manhattan distance in vector space. In other implementations other techniques may be used to determine the similarity value156. For example, the similarity value156may be based on a Euclidian distance between two FVs152. A gallery module160maintains gallery data162for one or more users108. The gallery data162comprises one or more FVs152that are associated with a user108. The gallery data162may maintain a homogenous set164of FVs152and a diverse set166of FVs152. In some implementations a separate set of gallery data162may be maintained for each user108. The gallery data162may include data associated with a face, entire body, portion of a body, and so forth. In other implementations the gallery data162may include data associated with autonomous devices, pets, and so forth. The homogenous set164of FVs152are FVs152of that user108that have similarity values156with respect to one another that are within a first range. For example, the homogenous set164of FVs152may be associated with images of the first user108from slightly different views, such as while the user108is turning around. In one implementation the homogenous set164may include FVs152having a similarity value156of 0.8 to 0.9. The homogenous set164may omit storing FVs152that are associated with greater than a threshold similarity value156. For example, FVs152having a similarity value of greater than 0.9 may be discarded. This reduces the amount of memory used by not storing FVs152that are considered redundant. The diverse set166of FVs152are FVs152of the user108that have similarity values156with respect to one another that are within a second range. For example, the diverse set166of FVs152may be associated with images of the first user108from substantially different views, such as front, left, rear, right, seated, standing, and so forth. In one implementation the diverse set166may include FVs152having a similarity value156of less than 0.8. The diverse set166may omit storing FVs152that are associated with less than a threshold similarity value156. As above, this reduces the amount of memory used by not storing FVs152that are considered redundant. 
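As a concrete illustration of the similarity value and of the two sets, the sketch below converts a Manhattan distance into a similarity value and classifies a candidate FV against a user's existing gallery. The distance-to-similarity mapping is an assumption; the 0.9 and 0.8 cutoffs follow the example ranges given above.

    import numpy as np

    def similarity_value(fv_a, fv_b):
        """Similarity value derived from the Manhattan (L1) distance between two feature vectors."""
        distance = float(np.abs(np.asarray(fv_a) - np.asarray(fv_b)).sum())
        return 1.0 / (1.0 + distance)            # assumed mapping: identical FVs map to 1.0

    def classify_candidate(candidate, gallery_fvs):
        """Decide how a candidate FV relates to a user's stored gallery FVs."""
        if not gallery_fvs:
            return "diverse"
        best = max(similarity_value(candidate, fv) for fv in gallery_fvs)
        if best > 0.9:
            return "redundant"                   # too similar to an existing FV; not stored
        if best >= 0.8:
            return "homogenous"                  # slightly different view of the same user
        return "diverse"                         # substantially different pose or orientation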
The gallery data162is discussed in more detail with regard toFIG.2. A comparison module168determines identification data170for a person depicted in the cropped image126. The comparison module168may use one or more proximity values136and one or more similarity values156to determine the identification data170. For example, a set of proximity values136for a second time may be determined. The largest proximity value136and associated user108(1) from that set may be determined. Continuing the example, the similarity module154may generate a set of similarity values156indicative of a comparison of the FV152from the cropped image126acquired at the second time to previously stored FVs152in the gallery data162. The largest similarity value156and associated user108(1) from that set may be determined. If a same user108(1) is associated with the largest proximity value136and the largest similarity value156, the person depicted in the cropped image126acquired at the second time may be asserted to be the user108(1). The comparison module168may then generate identification data170that indicates user108(1) is at the location in physical space130at the second time. In other implementations, the comparison module168may use other techniques to determine the identification data170. For example, the proximity values136and the similarity values156may be normalized and summed to determine an overall comparison value. The combination of a particular proximity value136and similarity value156having the greatest overall comparison value may be deemed to identify the user108depicted in the cropped image126. FIG.2illustrates a situation200in which gallery data162is used for identifying users108, according to some implementations. As cropped images126are processed by the feature vector module150, the resulting FVs152may be assessed by the gallery module160to determine whether those resulting FVs152will be included in gallery data162. The FVs152included in the gallery data162are associated with a particular user identifier202. For example, a set of gallery data162may comprise a set of FVs152that are associated with a single user identifier202. Several cropped images126are shown with their corresponding FVs152and relative similarity values156between pairs of FVs152. A candidate FV152is processed by the similarity module154to determine similarity values156of that candidate FV152to previously stored FVs152in the gallery data162. If the candidate FV152exhibits a similarity value156with any previously stored FV152that is greater than a first threshold, that candidate FV152is disregarded and would not be included in the gallery data162. For example, the discarded candidate FV152is deemed redundant with respect to the existing FVs152included in the gallery data162. If the candidate FV152exhibits a similarity value156that is within a first range, the candidate FV152may be included in the gallery data162within the homogenous set164. If the candidate FV152exhibits a similarity value156that is within a second range (exclusive of the first range), the candidate FV152may be included in the gallery data162within the diverse set166. In some implementations the gallery data162may include additional information. For example, the gallery data162may include orientation data indicative of an estimated orientation204of the user108. The estimated orientation204may be indicative of an estimated orientation of the user108relative to the camera106. 
For example, if the user108is directly facing the camera106in the image data122, the estimated orientation204may be 0 degrees. Continuing the example, if the user108is facing away from the camera106such that the back of their head and shoulders are in the image data122, the estimated orientation204may be 180 degrees. In one implementation the estimated orientation204may be determined using a trained ANN. In another implementation, one or more keypoints on the user108may be determined in the cropped images126. For example, keypoints may comprise left shoulder, right shoulder, face, and so forth. Based on the relative positioning of the keypoints in the image data122, the estimated orientation204may be determined. In some implementations, operation of the similarity module154may use the estimated orientation204. For example, an estimated orientation204of a person in the cropped image126may be determined. Subsequently, the similarity module154may generate similarity values156for those FVs152that are within a threshold range of the estimated orientation204. The process of determining whether to include a candidate FV152in the gallery data162is described in more detail with regard toFIG.7.

FIG.3depicts a block diagram300of an artificial neural network (ANN) to generate the FVs152representative of features of a person depicted in an image, according to some implementations. The feature vector module150may use the ANN described herein to generate the FVs152. As depicted here, the architecture of the ANN may comprise a backbone of expanded convolutional modules with attention modules incorporated therein. For example, the ANN may use a backbone of a neural network such as MobileNet v2 (Sandler, Mark, “MobileNetV2: Inverted Residuals and Linear Bottlenecks”, arxiv.org/pdf/1801.04381.pdf) into which attention modules such as a HACNN (Li, Wei, “Harmonious Attention Network for Person Re-Identification”, arxiv.org/pdf/1802.08122.pdf) have been integrated. The backbone may comprise convolution modules302and expanded convolution modules304as described with regard to the MobileNet v2 architecture. The convolution modules302comprise a residual convolution block while the expanded convolution module304provides depth and spatial decoupling. This provides a depth-wise convolution that reduces processing time while retaining accuracy. The residual block nature of the MobileNet v2 architecture facilitates retention during processing of features with less feature fading.

The attention modules306are incorporated into the backbone. In this illustration, four attention modules306are shown. The attention modules306operate to provide a weight map that emphasizes the features of the person in the foreground while deemphasizing features in the background. The attention module306may employ at least two kinds of attention. Spatial attention attempts to suppress background features and emphasize foreground features with respect to the physical space. In comparison, channel attention attempts to suppress features that are not likely to be associated with the person, while emphasizing features that are likely associated with the person. As a crude example, spatial attention uses where in physical space something is to try and determine if it is in the foreground and should be considered. In comparison, channel attention determines if something looks like a background and should be disregarded.
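In rough TensorFlow/Keras terms, the arrangement of FIG.3— a MobileNet v2-style backbone of expanded convolution (inverted residual) modules with attention modules306inserted at selected depths — might be sketched as below. The channel widths, strides, insertion points, and the placeholder attention function are assumptions for illustration only; this is not the HAMNET computer program listing referenced later.

    import tensorflow as tf
    from tensorflow.keras import layers

    def expanded_conv(x, filters, stride=1, expansion=6):
        """MobileNet v2-style inverted residual block: expand, depthwise convolve, project."""
        in_ch = x.shape[-1]
        y = layers.Conv2D(in_ch * expansion, 1, padding="same", activation="relu")(x)
        y = layers.DepthwiseConv2D(3, strides=stride, padding="same", activation="relu")(y)
        y = layers.Conv2D(filters, 1, padding="same")(y)            # linear projection
        if stride == 1 and in_ch == filters:
            y = layers.Add()([x, y])                                # residual connection
        return y

    def attention_placeholder(x):
        """Stand-in for an attention module306: a per-pixel, per-channel weight map in [0, 1]."""
        weights = layers.Conv2D(x.shape[-1], 1, activation="sigmoid")(x)
        return layers.Multiply()([x, weights])

    def build_backbone(input_shape=(256, 128, 3), attention_after=(5, 8, 12, 15), fv_dim=128):
        inputs = layers.Input(shape=input_shape)
        x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)
        block_filters = [16, 24, 24, 32, 32, 32, 64, 64, 64, 64, 96, 96, 96, 160, 160, 160, 320]
        for i, filters in enumerate(block_filters, start=1):
            x = expanded_conv(x, filters, stride=2 if i in (2, 4, 7, 14) else 1)
            if i in attention_after:
                x = attention_placeholder(x)                        # attention inserted into the backbone
        x = layers.GlobalAveragePooling2D()(x)
        feature_vector = layers.Dense(fv_dim)(x)                    # feature vector152
        return tf.keras.Model(inputs, feature_vector)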
In addition to passing data from the expanded convolution module304to the attention module306, residual data is provided to the expanded convolution module304that is after the attention module306. For example, the attention module306may comprise a “harmonious attention module”. (See Li, Wei, “Harmonious Attention Network for Person Re-Identification”, arxiv.org/pdf/1802.08122.pdf). In the implementation depicted here, the attention modules306may be inserted after the 5th, 8th, 12th, and 15th expanded convolution modules304of the MobileNet v2 architecture. In other implementations, the attention modules306may be used elsewhere within the backbone. By using this architecture, an overall improvement in accuracy of identification of approximately 5% was observed during testing. The attention modules306are discussed in more detail with regard toFIG.4.

FIG.4depicts a block diagram400of an attention module306of the neural network ofFIG.3, according to some implementations. The output from the expanded convolution module304(1), such as a feature cube, is provided as input to the attention module306. The attention module306uses a spatial attention algorithm and a channel attention algorithm to produce output data such as a feature weight map. Each attention module306comprises a spatial attention module402that implements the spatial attention algorithm and a channel attention module406that implements the channel attention algorithm.

The spatial attention module402comprises a plurality of convolution modules404. In one implementation, the spatial attention module402may comprise a first convolution module404and a second convolution module404that operate in series to regress the input to a first output data. The spatial attention module402may determine a first feature weight map indicative of weights associated with portions of the input, with the weight indicating a likelihood that those portions contain useful detail. It may be conceptualized that the first feature weight map indicates which pixels in the data should be considered most important. The values in the first feature weight map may be between 0 and 1. Additional modules may be present in the spatial attention module402. For example, the spatial attention module402may include a reduce mean module, the two convolution modules404, and a resize module. Input to the spatial attention module402may comprise a feature cube “F5” provided from the expanded convolution module304and may have a size as indicated by [Batch×Height×Width×Channel]=[1×32×16×32], and a spatial size of [H, W]=[32×16]. The spatial attention module402takes F5 as input, the reduce mean module determines an average over the channels, then the output is passed through the two convolution modules404. As a result, the feature cube changes as [1×32×16×32], [1×32×16×1], [1×16×8×1], [1×32×16×1], and [1×32×16×1]. In this example, the final output of the spatial attention module402is a 1 channel map of the same size as the feature cube.

The channel attention module406accepts as input the output, such as the feature cube, from the preceding expanded convolution module304and generates second output data. The channel attention algorithm implemented by the channel attention module406may determine a second feature weight map indicative of weights associated with channels of the input. It may be conceptualized that the second feature weight map indicates which channels should be considered most important.
The values in the second feature weight map may be between 0 and 1. The channel attention module406comprises an average pool module408to perform a pooling function on the input. A feature cube is provided as input to the average pool module408. The result of the pooling may comprise a weight vector of the same length as the number of channels. Output from the average pool module408is connected to a first convolution module410. Output from the first convolution module410is provided as input to a second convolution module410. The second convolution module410generates the second output data. The second output data may comprise an attention cube having the same dimensions as the feature cube. For example, the feature cube “F5” from the expanded convolution module304may be provided as input to the average pool module408. The feature cube is of size [Batch×Height×Width×Channel]=[1×32×16×32]. As a result, the feature cube changes as [1×32×16×32], [1×1×1×32], [1×1×1×2], and [1×1×1×32]. The output of the channel attention module406thus comprises a weight vector of the same length as the number of channels.

The first output data and the second output data are provided as input to a convolution module412that generates third output data. The third output data is provided as input to the subsequent expanded convolution module304(2). In one implementation, the convolution module412may process the first feature weight map and the second feature weight map to generate a third feature weight map. For example, the third feature weight map may comprise a saliency weight map. The subsequent expanded convolution module304(2) may also accept as input at least a portion of the output from the prior expanded convolution module304(1).

The neural network described with regard toFIGS.3and4is also referred to as “HAMNET”. One implementation of HAMNET is described in the computer program listing appendix that is submitted herewith. This computer program listing is expressed in the Python programming language as promulgated by the Python Software Foundation at python.org and using the TensorFlow environment as described in Martin Abadi, et al, “TensorFlow: Large-scale machine learning on heterogeneous systems,” 2015 and promulgated at tensorflow.org. In some implementations, parameters of the attention modules306may be modified based on the position within the backbone of the architecture. For example, for a second attention module306, the input data is of size [1×16×8×64], and the weight map provided as output has the same dimensionality.

The neural network used to generate the feature vectors152may be trained using standard cross-entropy loss and triplet loss. Images from the same person may be treated as one class, so cross-entropy loss enforces correct classification of different people into different classes. Triplet loss measures the pairwise distance between different people, and penalizes a pairwise distance that is less than a threshold. The triplet loss enforces that a distance in vector space between feature vectors152of different people will be greater than a predefined threshold.
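Putting the two attention paths and the combining convolution together, one possible reading of FIG.4in TensorFlow/Keras terms is sketched below. The layer hyperparameters, the position of the resize relative to the second convolution, and the reduction factor in the channel path are assumptions chosen only to reproduce the example shape sequences above; the authoritative implementation is the HAMNET computer program listing.

    import tensorflow as tf
    from tensorflow.keras import layers

    def attention_module(feature_cube, reduction=16):
        """Sketch of an attention module306: spatial and channel weight maps combined by a 1x1 convolution.

        For a [1, 32, 16, 32] feature cube the spatial path follows
        [1, 32, 16, 1] -> [1, 16, 8, 1] -> [1, 32, 16, 1] -> [1, 32, 16, 1], and the channel path follows
        [1, 1, 1, 32] -> [1, 1, 1, 2] -> [1, 1, 1, 32].
        """
        height, width, channels = feature_cube.shape[1], feature_cube.shape[2], feature_cube.shape[3]

        # Spatial attention path (spatial attention module402).
        s = tf.reduce_mean(feature_cube, axis=-1, keepdims=True)                  # reduce mean over channels
        s = layers.Conv2D(1, 3, strides=2, padding="same", activation="relu")(s)  # first convolution module404
        s = tf.image.resize(s, (height, width))                                   # resize module
        s = layers.Conv2D(1, 1, padding="same", activation="sigmoid")(s)          # second convolution module404

        # Channel attention path (channel attention module406).
        c = tf.reduce_mean(feature_cube, axis=[1, 2], keepdims=True)              # average pool module408
        c = layers.Conv2D(max(channels // reduction, 1), 1, activation="relu")(c) # first convolution module410
        c = layers.Conv2D(channels, 1, activation="sigmoid")(c)                   # second convolution module410

        # Combine the weight maps (convolution module412) and apply them to the feature cube.
        combined = layers.Conv2D(channels, 1, activation="sigmoid")(s * c)        # broadcasts to [B, H, W, C]
        return feature_cube * combined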
One or more clocks504may provide information indicative of date, time, ticks, and so forth. For example, a processor506may use data from the clock504to associate a particular time with an action, control operation of the lights, display device, the sensor data542, and so forth. The AMD104may include one or more hardware processors506(processors) configured to execute one or more stored instructions. The processors506may comprise one or more cores. The processors506may include microcontrollers, systems on a chip, field programmable gate arrays, digital signal processors, graphic processing units, general processing units, and so forth. The AMD104may include one or more communication interfaces508such as input/output (I/O) interfaces510, network interfaces512, and so forth. The communication interfaces508enable the AMD104, or components thereof, to communicate with other devices or components. The communication interfaces508may include one or more I/O interfaces510. The I/O interfaces510may comprise Inter-Integrated Circuit (I2C), Serial Peripheral Interface bus (SPI), Universal Serial Bus (USB) as promulgated by the USB Implementers Forum, RS-232, and so forth. The I/O interface(s)510may couple to one or more I/O devices514. The I/O devices514may include input devices such as one or more of a sensor516, keyboard, mouse, scanner, and so forth. The I/O devices514may also include output devices518such as one or more of a motor, speaker, display device, printer, and so forth. I/O devices514are described in more detail with regard toFIG.6. In some embodiments, the I/O devices514may be physically incorporated with the AMD104or may be externally placed. The network interfaces512may be configured to provide communications between the AMD104and other devices such as other AMDs104, docking stations, routers, access points, and so forth. The network interfaces512may include devices configured to couple to personal area networks (PANs), local area networks (LANs), wireless local area networks (WLANS), wide area networks (WANs), and so forth. For example, the network interfaces512may include devices compatible with Ethernet, Wi-Fi, Bluetooth, Bluetooth Low Energy, ZigBee, and so forth. The AMD104may also include one or more busses or other internal communications hardware or software that allow for the transfer of data between the various modules and components of the AMD104. As shown inFIG.5, the AMD104includes one or more memories520. The memory520may comprise one or more non-transitory computer-readable storage media (CRSM). The CRSM may be any one or more of an electronic storage medium, a magnetic storage medium, an optical storage medium, a quantum storage medium, a mechanical computer storage medium, and so forth. The memory520provides storage of computer-readable instructions, data structures, program modules, and other data for the operation of the AMD104. A few example functional modules are shown stored in the memory520, although the same functionality may alternatively be implemented in hardware, firmware, or as a system on a chip (SoC). The memory520may include at least one operating system (OS) module522. The OS module522is configured to manage hardware resource devices such as the I/O interfaces510, the I/O devices514, the communication interfaces508, and provide various services to applications or modules executing on the processors506. 
The OS module522may implement a variant of the FreeBSD operating system as promulgated by the FreeBSD Project; other UNIX or UNIX-like variants; a variation of the Linux operating system as promulgated by Linus Torvalds; the Windows operating system from Microsoft Corporation of Redmond, Washington, USA; the AMD Operating System (ROS) as promulgated at www.ros.org, and so forth. Also stored in the memory520may be a data store524and one or more of the following modules. These modules may be executed as foreground applications, background tasks, daemons, and so forth. The data store524may use a flat file, database, linked list, tree, executable code, script, or other data structure to store information. In some implementations, the data store524or a portion of the data store524may be distributed across one or more other devices including other AMDs104, servers, network attached storage devices, and so forth. A communication module526may be configured to establish communication with other devices, such as other AMDs104, an external server, a docking station, and so forth. The communications may be authenticated, encrypted, and so forth. Other modules within the memory520may include a safety module528, the identification module120, an autonomous navigation module530, one or more task modules532, or other modules534. The modules may use data stored within the data store524, including safety tolerance data540, sensor data542including the image data122, input data544, the cropped image(s)126, the motion prediction data134, the proximity values136, the feature vectors152, similarity values156, the gallery data162, identification data170, an occupancy map546, thresholds548, path plan data550, other data552, and so forth. The safety module528may access the safety tolerance data540to determine within what tolerances the AMD104may operate safely within the physical space102. For example, the safety module528may be configured to reduce the speed at which the AMD104moves as it approaches an obstacle110. In another example, the safety module528may access safety tolerance data540that specifies a minimum distance from a person that the AMD104is to maintain. Movement of the AMD104may be stopped by one or more of inhibiting operations of one or more of the motors, issuing a command to stop motor operation, disconnecting power from one or more of the motors, and so forth. The safety module528may be implemented as hardware, software, or a combination thereof. The autonomous navigation module530provides the AMD104with the ability to navigate within the physical space102without real-time human interaction. The autonomous navigation module530may implement, or operate in conjunction with, a mapping module to determine an occupancy map546, a navigation map, or other representation of the physical space102. In one implementation, the mapping module530may use one or more simultaneous localization and mapping (“SLAM”) techniques. The SLAM algorithms may utilize one or more of maps, algorithms, beacons, or other techniques to navigate. The autonomous navigation module530may use the navigation map to determine a set of possible paths along which the AMD104may move. One of these may be selected and used to determine path plan data550indicative of a path. For example, a possible path that is the shortest or has the fewest turns may be selected and used to determine the path. The path is then subsequently used to determine a set of commands that drive the motors connected to the wheels. 
For example, the autonomous navigation module530may determine the current location within the physical space102and determine path plan data550that describes the path to a destination location such as the docking station. The autonomous navigation module530may utilize various techniques during processing of sensor data542. For example, image data122obtained from cameras106on the AMD104may be processed to determine one or more of corners, edges, planes, and so forth. In some implementations, corners may be detected and the coordinates of those corners may be used to produce point cloud data. This point cloud data may then be used for SLAM or other purposes associated with mapping, navigation, and so forth. The AMD104may move responsive to a determination made by an onboard processor506, in response to a command received from one or more communication interfaces508, as determined from the sensor data542, and so forth. For example, an external server may send a command that is received using the network interface512. This command may direct the AMD104to proceed to find a particular user108, follow a particular user108, and so forth. The AMD104may then process this command and use the autonomous navigation module530to determine the directions and distances associated with carrying out the command. For example, the command to “come here” may result in a task module532sending a command to the autonomous navigation module530to move the AMD104to a particular location near the user108and orient the AMD104in a particular direction. The AMD104may connect to a network using one or more of the network interfaces512. In some implementations, one or more of the modules or other functions described here may execute on the processors506of the AMD104, on the server, or a combination thereof. For example, one or more servers may provide various functions, such as automated speech recognition (ASR), natural language understanding (NLU), providing content such as audio or video to the AMD104, and so forth. The other modules534may provide other functionality, such as object recognition, speech synthesis, user identification, and so forth. The other modules534may comprise a speech synthesis module that is able to convert text data to human speech. For example, the speech synthesis module may be used by the AMD104to provide speech that a user108is able to understand. The data store524may store the other data552as well. For example, localization settings may indicate local preferences such as language, user identifier data may be stored that allows for identification of a particular user108, and so forth. FIG.6is a block diagram600of some components of the AMD104such as network interfaces512, sensors516, and output devices518, according to some implementations. The components illustrated here are provided by way of illustration and not necessarily as a limitation. For example, the AMD104may utilize a subset of the particular network interfaces512, output devices518, or sensors516depicted here, or may utilize components not pictured. One or more of the sensors516, output devices518, or a combination thereof may be included on a moveable component that may be panned, tilted, rotated, or any combination thereof with respect to a chassis of the AMD104. The network interfaces512may include one or more of a WLAN interface602, PAN interface604, secondary radio frequency (RF) link interface606, or other608interface. The WLAN interface602may be compliant with at least a portion of the Wi-Fi specification. 
For example, the WLAN interface602may be compliant with at least a portion of the IEEE 802.11 specification as promulgated by the Institute of Electrical and Electronics Engineers (IEEE). The PAN interface604may be compliant with at least a portion of one or more of the Bluetooth, wireless USB, Z-Wave, ZigBee, or other standards. For example, the PAN interface604may be compliant with the Bluetooth Low Energy (BLE) specification. The secondary RF link interface606may comprise a radio transmitter and receiver that operate at frequencies different from or using modulation different from the other interfaces. For example, the WLAN interface602may utilize frequencies in the 2.4 GHz and 5 GHz Industrial, Scientific, and Medical (ISM) bands, while the PAN interface604may utilize the 2.4 GHz ISM band. The secondary RF link interface606may comprise a radio transmitter that operates in the 900 MHz ISM band, within a licensed band at another frequency, and so forth. The secondary RF link interface606may be utilized to provide backup communication between the AMD104and other devices in the event that communication fails using one or more of the WLAN interface602or the PAN interface604. For example, in the event the AMD104travels to an area within the physical space102that does not have Wi-Fi coverage, the AMD104may use the secondary RF link interface606to communicate with another device such as a specialized access point, docking station, or other AMD104. The other608network interfaces may include other equipment to send or receive data using other wavelengths or phenomena. For example, the other608network interface may include an ultrasonic transceiver used to send data as ultrasonic sounds, a visible light system that communicates by modulating a visible light source such as a light-emitting diode, and so forth. In another example, the other608network interface may comprise a wireless wide area network (WWAN) interface or a wireless cellular data network interface. Continuing the example, the other608network interface may be compliant with at least a portion of the 4G, 5G, 6G, LTE, or other standards. The AMD104may include one or more of the following sensors516. The sensors516depicted here are provided by way of illustration and not necessarily as a limitation. It is understood that other sensors516may be included or utilized by the AMD104, while some sensors516may be omitted in some configurations. A motor encoder610provides information indicative of the rotation or linear extension of a motor680. The motor680may comprise a rotary motor, or a linear actuator. In some implementations, the motor encoder610may comprise a separate assembly such as a photodiode and encoder wheel that is affixed to the motor680. In other implementations, the motor encoder610may comprise circuitry configured to drive the motor680. For example, the autonomous navigation module530may utilize the data from the motor encoder610to estimate a distance traveled. A suspension weight sensor612provides information indicative of the weight of the AMD104on the suspension system for one or more of the wheels or the caster. For example, the suspension weight sensor612may comprise a switch, strain gauge, load cell, photodetector, or other sensing element that is used to determine whether weight is applied to a particular wheel, or whether weight has been removed from the wheel.
In some implementations, the suspension weight sensor612may provide binary data such as a “1” value indicating that there is a weight applied to the wheel, while a “0” value indicates that there is no weight applied to the wheel. In other implementations, the suspension weight sensor612may provide an indication such as so many kilograms of force or newtons of force. The suspension weight sensor612may be affixed to one or more of the wheels or the caster. One or more bumper switches614provide an indication of physical contact between an object and a bumper or other member that is in mechanical contact with the bumper switch614. The safety module528utilizes sensor data542obtained by the bumper switches614to modify the operation of the AMD104. For example, if the bumper switch614associated with a front of the AMD104is triggered, the safety module528may drive the AMD104backwards. A floor optical motion sensor (FOMS)616provides information indicative of motion of the AMD104relative to the floor or other surface underneath the AMD104. In one implementation, the FOMS616may comprise a light source such as a light-emitting diode (LED), an array of photodiodes, and so forth. In some implementations, the FOMS616may utilize an optoelectronic sensor, such as a low-resolution two-dimensional array of photodiodes. Several techniques may be used to determine changes in the data obtained by the photodiodes and translate this into data indicative of a direction of movement, velocity, acceleration, and so forth. In some implementations, the FOMS616may provide other information, such as data indicative of a pattern present on the floor, composition of the floor, color of the floor, and so forth. For example, the FOMS616may utilize an optoelectronic sensor that may detect different colors or shades of gray, and this data may be used to generate floor characterization data. The floor characterization data may be used for navigation. An ultrasonic sensor618utilizes sounds in excess of 50 kHz to determine a distance from the sensor516to an object. The ultrasonic sensor618may comprise an emitter such as a piezoelectric transducer and a detector such as an ultrasonic microphone. The emitter may generate specifically timed pulses of ultrasonic sound while the detector listens for an echo of that sound being reflected from an object within the field of view. The ultrasonic sensor618may provide information indicative of a presence of an object, distance to the object, relative velocity, direction of movement, and so forth. Two or more ultrasonic sensors618may be utilized in conjunction with one another to determine a location within a two-dimensional plane of the object. In some implementations, the ultrasonic sensor618or a portion thereof may be used to provide other functionality. For example, the emitter of the ultrasonic sensor618may be used to transmit data and the detector may be used to receive data transmitted that is ultrasonic sound. In another example, the emitter of an ultrasonic sensor618may be set to a particular frequency and used to generate a particular waveform such as a sawtooth pattern to provide a signal that is audible to an animal, such as a dog or a cat. An optical sensor620may provide sensor data542indicative of one or more of a presence or absence of an object, a distance to the object, or characteristics of the object. The optical sensor620may use time-of-flight (ToF), structured light, interferometry, or other techniques to generate the distance data. 
For example, ToF determines a propagation time (or “round-trip” time) of a pulse of emitted light from an optical emitter or illuminator that is reflected or otherwise returned to an optical detector. By dividing the propagation time in half and multiplying the result by the speed of light in air, the distance to an object may be determined. The optical sensor620may utilize one or more sensing elements. For example, the optical sensor620may comprise a 4×4 array of light sensing elements. Each individual sensing element may be associated with a field of view (FOV)118that is directed in a different way. For example, the optical sensor620may have four light sensing elements, each associated with a different 10° FOV, allowing the sensor to have an overall FOV of 40°. In another implementation, a structured light pattern may be provided by the optical emitter. A portion of the structured light pattern may then be detected on the object using a sensor516such as an image sensor or camera106. Based on an apparent distance between the features of the structured light pattern, the distance to the object may be calculated. Other techniques may also be used to determine distance to the object. In another example, the color of the reflected light may be used to characterize the object, such as whether the object is skin, clothing, flooring, upholstery, and so forth. In some implementations, the optical sensor620may operate as a depth camera, providing a two-dimensional image of a scene, as well as data that indicates a distance to each pixel. Data from the optical sensors620may be utilized for collision avoidance. For example, the safety module528and the autonomous navigation module530may utilize the sensor data542indicative of the distance to an object in order to prevent a collision with that object. Multiple optical sensors620may be operated such that their FOV overlap at least partially. To minimize or eliminate interference, the optical sensors620may selectively control one or more of the timing, modulation, or frequency of the light emitted. For example, a first optical sensor620may emit light modulated at 60 kHz while a second optical sensor620emits light modulated at 63 kHz. A lidar622sensor provides information indicative of a distance to an object or portion thereof by utilizing laser light. The laser is scanned across a scene at various points, emitting pulses which may be reflected by objects within the scene. Based on the time-of-flight distance to that particular point, sensor data542may be generated that is indicative of the presence of objects and the relative positions, shapes, relative velocity, direction of movement, and so forth that are visible to the lidar622. Data from the lidar622may be used by various modules. For example, the autonomous navigation module530may utilize point cloud data generated by the lidar622for localization of the AMD104within the physical space102. The AMD104may include a mast. For example, a light682may be mounted on the mast. A mast position sensor624provides information indicative of a position of the mast of the AMD104. For example, the mast position sensor624may comprise limit switches associated with the mast extension mechanism that indicate whether the mast is at an extended or retracted position. In other implementations, the mast position sensor624may comprise an optical code on at least a portion of the mast that is then interrogated by an optical emitter and a photodetector to determine the distance to which the mast is extended.
In another implementation, the mast position sensor624may comprise an encoder wheel that is attached to a mast motor that is used to raise or lower the mast. The mast position sensor624may provide data to the safety module528. For example, if the AMD104is preparing to move, data from the mast position sensor624may be checked to determine if the mast is retracted, and if not, the mast may be retracted prior to beginning movement. A mast strain sensor626provides information indicative of a strain on the mast with respect to the remainder of the AMD104. For example, the mast strain sensor626may comprise a strain gauge or load cell that measures a side-load applied to the mast or a weight on the mast or downward pressure on the mast. The safety module528may utilize sensor data542obtained by the mast strain sensor626. For example, if the strain applied to the mast exceeds a threshold amount, the safety module528may direct an audible and visible alarm to be presented by the AMD104. The AMD104may include a modular payload bay. A payload weight sensor628provides information indicative of the weight associated with the modular payload bay. The payload weight sensor628may comprise one or more sensing mechanisms to determine the weight of a load. These sensing mechanisms may include piezoresistive devices, piezoelectric devices, capacitive devices, electromagnetic devices, optical devices, potentiometric devices, microelectromechanical devices, and so forth. The sensing mechanisms may operate as transducers that generate one or more signals based on an applied force, such as that of the load due to gravity. For example, the payload weight sensor628may comprise a load cell having a strain gauge and a structural member that deforms slightly when weight is applied. By measuring a change in the electrical characteristic of the strain gauge, such as capacitance or resistance, the weight may be determined. In another example, the payload weight sensor628may comprise a force sensing resistor (FSR). The FSR may comprise a resilient material that changes one or more electrical characteristics when compressed. For example, the electrical resistance of a particular portion of the FSR may decrease as the particular portion is compressed. In some implementations, the safety module528may utilize the payload weight sensor628to determine if the modular payload bay has been overloaded. If so, an alert or notification may be issued. One or more device temperature sensors630may be utilized by the AMD104. The device temperature sensors630provide temperature data of one or more components within the AMD104. For example, a device temperature sensor630may indicate a temperature of one or more the batteries502, one or more motors680, and so forth. In the event the temperature exceeds a threshold value, the component associated with that device temperature sensor630may be shut down. One or more interlock sensors632may provide data to the safety module528or other circuitry that prevents the AMD104from operating in an unsafe condition. For example, the interlock sensors632may comprise switches that indicate whether an access panel is open. The interlock sensors632may be configured to inhibit operation of the AMD104until the interlock switch indicates a safe condition is present. An inertial measurement unit (IMU)634may include a plurality of gyroscopes636and accelerometers638arranged along different axes. The gyroscope636may provide information indicative of rotation of an object affixed thereto. 
For example, a gyroscope636may generate sensor data542that is indicative of a change in orientation of the AMD104or a portion thereof. The accelerometer638provides information indicative of a direction and magnitude of an imposed acceleration. Data such as rate of change, determination of changes in direction, speed, and so forth may be determined using the accelerometer638. The accelerometer638may comprise mechanical, optical, micro-electromechanical, or other devices. For example, the gyroscope636and the accelerometer638may comprise a prepackaged solid-state unit. A magnetometer640may be used to determine an orientation by measuring ambient magnetic fields, such as the terrestrial magnetic field. For example, the magnetometer640may comprise a Hall effect transistor that provides output compass data indicative of a magnetic heading. The AMD104may include one or more location sensors642. The location sensors642may comprise an optical, radio, or other navigational system such as a global positioning system (GPS) receiver. For indoor operation, the location sensors642may comprise indoor position systems, such as using Wi-Fi Positioning Systems (WPS). The location sensors642may provide information indicative of a relative location, such as “living room” or an absolute location such as particular coordinates indicative of latitude and longitude, or displacement with respect to a predefined origin. A photodetector644provides sensor data542indicative of impinging light. For example, the photodetector644may provide data indicative of a color, intensity, duration, and so forth. A camera106generates sensor data542indicative of one or more images. The camera106may be configured to detect light in one or more wavelengths including, but not limited to, terahertz, infrared, visible, ultraviolet, and so forth. For example, an infrared camera106may be sensitive to wavelengths between approximately 700 nanometers and 1 millimeter. The camera106may comprise charge coupled devices (CCD), complementary metal oxide semiconductor (CMOS) devices, microbolometers, and so forth. The AMD104may use image data122acquired by the camera106for object recognition, navigation, collision avoidance, user communication, and so forth. For example, a pair of cameras106may be mounted on the AMD104to provide binocular stereo vision, with the sensor data542comprising images being sent to the identification module120, the autonomous navigation module530, and so forth. In another example, the camera106may comprise a 10 megapixel or greater camera that is used for videoconferencing or for acquiring pictures for the user108. One or more microphones646may be configured to acquire information indicative of sound present in the physical space102. In some implementations, arrays of microphones646may be used. These arrays may implement beamforming techniques to provide for directionality of gain. The AMD104may use the one or more microphones646to acquire information from acoustic tags, accept voice input from users108, determine a direction of an utterance, determine ambient noise levels, for voice communication with another user108or system, and so forth. An air pressure sensor648may provide information indicative of an ambient atmospheric pressure or changes in ambient atmospheric pressure. For example, the air pressure sensor648may provide information indicative of changes in air pressure due to opening and closing of doors, weather events, and so forth.
An air quality sensor650may provide information indicative of one or more attributes of the ambient atmosphere. For example, the air quality sensor650may include one or more chemical sensing elements to detect the presence of carbon monoxide, carbon dioxide, ozone, and so forth. In another example, the air quality sensor650may comprise one or more elements to detect particulate matter in the air, such as a photoelectric detector, an ionization chamber, and so forth. In another example, the air quality sensor650may include a hygrometer that provides information indicative of relative humidity. An ambient light sensor652may comprise one or more photodetectors or other light-sensitive elements that are used to determine one or more of the color, intensity, or duration of ambient lighting around the AMD104. An ambient temperature sensor654provides information indicative of the temperature of the ambient physical space102proximate to the AMD104. In some implementations, an infrared temperature sensor may be utilized to determine the temperature of another object at a distance. A floor analysis sensor656may include one or more components that are used to generate at least a portion of floor characterization data. In one implementation, the floor analysis sensor656may comprise circuitry that may be used to determine one or more of the electrical resistance, electrical inductance, or electrical capacitance of the floor. For example, two or more of the wheels in contact with the floor may include an electrically conductive pathway between the circuitry and the floor. By using two or more of these wheels, the circuitry may measure one or more of the electrical properties of the floor. Information obtained by the floor analysis sensor656may be used by one or more of the safety module528, the autonomous navigation module530, the task module532, and so forth. For example, if the floor analysis sensor656determines that the floor is wet, the safety module528may decrease the speed of the AMD104and generate a notification alerting the user108. The floor analysis sensor656may include other components as well. For example, a coefficient of friction sensor may comprise a probe that comes into contact with the surface and determines the coefficient of friction between the probe and the floor. A caster rotation sensor658provides data indicative of one or more of a direction of orientation, angular velocity, linear speed of the caster, and so forth. For example, the caster rotation sensor658may comprise an optical encoder and corresponding target that is able to determine that the caster transitioned from an angle of 0° at a first time to 49° at a second time. The sensors516may include a radar660. The radar660may be used to provide information as to a distance, lateral position, relative velocity, and so forth, to an object. For example, the radar660may determine the distance to an object, such as a person. The sensors516may include a passive infrared (PIR) sensor662. The PIR sensor662may be used to detect the presence of users108, pets, hotspots, and so forth. For example, the PIR sensor662may be configured to detect infrared radiation with wavelengths between 8 and 14 micrometers. The AMD104may include other sensors664as well. For example, a capacitive proximity sensor may be used to provide proximity data to adjacent objects. Other sensors664may include radio frequency identification (RFID) readers, near field communication (NFC) systems, coded aperture cameras, and so forth.
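Continuing the caster example above (0° at a first time, 49° at a second time), an angular velocity can be derived from two encoder readings; the function below is a trivial illustration with hypothetical timestamps.

```python
# Angular velocity of the caster from two encoder readings at two times.
def caster_angular_velocity_deg_s(angle1_deg, t1_s, angle2_deg, t2_s):
    if t2_s == t1_s:
        raise ValueError("timestamps must differ")
    return (angle2_deg - angle1_deg) / (t2_s - t1_s)

# 0 degrees at t = 0.0 s and 49 degrees at t = 0.5 s -> 98 degrees per second
print(caster_angular_velocity_deg_s(0.0, 0.0, 49.0, 0.5))
```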
For example, NFC tags may be placed at various points within the physical space102to provide landmarks for the autonomous navigation module530. One or more touch sensors may be utilized to determine contact with a user108or other objects. The AMD104may include one or more output devices518. A motor680may be used to provide linear or rotary motion. For example, one or more motors680may be used to move the AMD104from one location to another. In another example, one or more motors680may move the display device686with respect to a chassis of the AMD104. A light682may be used to emit photons. For example, the light682may comprise a light emitting diode (LED), electroluminescent device, a quantum dot, laser, incandescent bulb, fluorescent tube, and so forth. A speaker684may be used to emit sound. The display device686may comprise one or more of a liquid crystal display, light emitting diode display, electrophoretic display, cholesteric liquid crystal display, interferometric display, and so forth. The display device686may include a backlight or a frontlight. The display device686may be used to present visible information such as graphics, pictures, text, and so forth. For example, the display device686may comprise an array of light emitting components, such as LEDs. In some implementations, the display device686may comprise a touchscreen that combines a touch sensor and a display device686. In some implementations, the AMD104may be equipped with a projector688. The projector688may be able to project an image on a surface, such as the floor, wall, ceiling, and so forth. In some implementations the projector688may be used to present the displayed image. A scent dispenser690may be used to emit one or more smells. For example, the scent dispenser690may comprise a plurality of different scented liquids that may be evaporated or vaporized in a controlled fashion to release predetermined amounts of each. One or more moveable component actuators692may comprise an electrically operated mechanism such as one or more of a motor, solenoid, piezoelectric material, electroactive polymer, shape-memory alloy, and so forth. The actuator controller may be used to provide a signal or other input that operates one or more of the moveable component actuators692to produce movement of the moveable component. In other implementations, other694output devices may be utilized. For example, the AMD104may include a haptic output device that provides output that produces particular touch sensations to the user108. Continuing the example, a motor680with an eccentric weight may be used to create a buzz or vibration to allow the AMD104to simulate the purr of a cat. FIG.7is a flow diagram700of a process for generating gallery data162, according to some implementations. The process may be implemented at least in part by one or more of the processors506on the AMD104, a docking station, one or more servers, or other devices. At702a first image of a first person is determined at a first time. For example, the camera106may acquire image data122at a first time. The first image may comprise a first cropped image126. The image data122may be processed by the user detection module124to determine the cropped images126. At704a first FV152of the first person as depicted in the first image is determined. For example, the cropped image126may be processed by the feature vector module150to determine a first FV152that is a candidate feature vector152. 
In some implementations an internal identifier may be associated with the person in the cropped image126. For example, the internal identifier may be associated with the person, but does not assert a particular identity to the person. At706a first similarity value156that is indicative of a similarity between the first FV152and a previously included FV152in gallery data162is determined. For example, the similarity module154may be used to determine a set of similarity values156comparing the first FV152to individual ones of the previously included FVs152in the gallery data162. At708the first similarity value156is compared to a first threshold. If the first similarity value156is greater than the first threshold, the process proceeds to710. For example, the first threshold may be indicative of a similarity value of 0.9. At710, the first FV152is disregarded. The comparison with the first threshold prevents redundant or duplicative FVs152that are substantially the same from being included in the gallery data162. This significantly reduces the memory requirements associated with storing the gallery data162. If the first similarity value156is less than the first threshold, the process proceeds to712. At712the first similarity value156is compared to a second threshold. The second threshold may have a value less than a value of the first threshold. If the first similarity value156is greater than the second threshold, the process proceeds to714. For example, the second threshold may be indicative of a similarity value of 0.8. At714, the first FV152is included in the homogenous set164of the gallery data162. The FVs152in the homogenous set164are similar to one another, but are not identical. If at712the first similarity value156is less than or equal to the second threshold, the process proceeds to716. The first threshold and the second threshold describe a first range. At716, the first FV152is included in the diverse set166of the gallery data162. In some implementations, storage in the diverse set166may be constrained using an additional comparison to determine that the similarity value156of FVs152in the diverse set166is greater than a third threshold value. This additional constraint may maintain a minimum level of diversity between FVs152in the diverse set166. As additional image data122is obtained and subsequent FVs152are generated, the gallery data162may be updated. The gallery data162may be constrained to a particular size. For example, each set of gallery data162for a given user108may be limited to 100 FVs152in the homogenous set164and 100 FVs152in the diverse set166. As additional FVs152are generated, existing FVs152may be replaced, with newest FVs152replacing oldest FVs152. FIG.8is a flow diagram800of a process for identifying a user108, according to some implementations. The process may be implemented at least in part by one or more of the processors506on the AMD104, a docking station, one or more servers, or other devices. At802a first location in physical space130of a first user108at a first time is determined. For example, the first location in physical space130of a previously identified first user108may be determined based on sensor data542acquired at the first time. The sensor data542may be acquired by sensors such as one or more cameras106, ultrasonic sensors618, optical sensors620, lidar622, radar660, and so forth. At804a first predicted location132of the first user108at a second time is determined.
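The two-threshold gallery update of FIG.7 can be sketched as follows. The container names, the use of cosine similarity, comparison against the best-matching existing FV, and the capacity of 100 per set mirror the examples above but are otherwise illustrative assumptions rather than the required implementation.

```python
# Sketch of the FIG.7 gallery update: near-duplicate feature vectors are
# disregarded, moderately similar ones go to the homogenous set, and the rest
# go to the diverse set.  Oldest entries are replaced once a set is full.
from collections import deque

FIRST_THRESHOLD = 0.9    # above this: redundant, disregard
SECOND_THRESHOLD = 0.8   # between the thresholds: homogenous set
MAX_PER_SET = 100        # example capacity per set

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def update_gallery(fv, homogenous: deque, diverse: deque) -> str:
    """Place fv according to its best similarity against the existing gallery."""
    existing = list(homogenous) + list(diverse)
    best = max((cosine_similarity(fv, g) for g in existing), default=0.0)
    if best > FIRST_THRESHOLD:
        return "disregarded"          # substantially a duplicate
    if best > SECOND_THRESHOLD:
        homogenous.append(fv)         # similar but not identical
        return "homogenous"
    diverse.append(fv)                # adds diversity to the gallery
    return "diverse"

homogenous = deque(maxlen=MAX_PER_SET)   # a full deque drops its oldest entry
diverse = deque(maxlen=MAX_PER_SET)
print(update_gallery([0.1, 0.9, 0.3], homogenous, diverse))
```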
For example, based on the first location in physical space data130from the first time, information indicative of a direction of travel, and an estimated velocity, the first predicted location132may be calculated. In some implementations Kalman filtering techniques may be used to determine the predicted location132. At806a second location in physical space130of a second person at the second time is determined. For example, the second location in physical space130may be determined based on sensor data542acquired at the second time. At808, based on the first predicted location132and the second location in physical space130, a proximity value136is determined. In one implementation, the proximity value136may indicate a reciprocal or multiplicative inverse of a distance between the first predicted location132and the second location in physical space130. In another implementation, the proximity value136may be determined based on a binning of the distance between the first predicted location132and the second location in physical space130. For example, a distance of 0 cm to 5 cm may be associated with a proximity value136of 0.95, a distance of 5 cm to 10 cm may be associated with a proximity value136of 0.8, and so forth. At810, a first image of the second person at the second time is acquired. For example, the camera106may acquire image data122that is processed to determine a cropped image126. At812a first FV152of the second person as depicted in the first image is determined. For example, the cropped image126is processed by the feature vector module150to determine the first FV152. At814a similarity value156is determined that is indicative of a similarity between the first FV152and one or more FVs152in the gallery data162. For example, the similarity module154may generate the similarity value156. At816identification data170is determined based on the proximity value136and the similarity value156. For example, if the proximity value136is greater than a first threshold value, it may be indicative of the second person being the first user108. Continuing the example, the similarity value156being greater than a second threshold value may indicate that the second person is the first user108. By using both comparisons, it may be asserted that the second person is the first user108. The identification data170may include the user identifier202or other information that identifies the user108depicted in the image data122at the second time. The processes and methods discussed in this disclosure may be implemented in hardware, software, or a combination thereof. In the context of software, the described operations represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more hardware processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. Those having ordinary skill in the art will readily recognize that certain steps or operations illustrated in the figures above may be eliminated, combined, or performed in an alternate order. Any steps or operations may be performed serially or in parallel. Furthermore, the order in which the operations are described is not intended to be construed as a limitation.
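A compact sketch of the FIG.8 decision, combining a binned proximity value with a feature-vector similarity value, might look like the following. The bin boundaries and the 0.95/0.8 values echo the example above; the threshold values and function names are otherwise hypothetical.

```python
# Sketch of FIG.8: assert identity only when both the proximity value and the
# similarity value clear their thresholds.  Thresholds below are illustrative.
def proximity_value(distance_cm: float) -> float:
    """Bin the distance between the predicted and observed locations."""
    if distance_cm <= 5.0:
        return 0.95
    if distance_cm <= 10.0:
        return 0.8
    return 0.1

def is_same_user(distance_cm: float, similarity: float,
                 proximity_threshold: float = 0.75,
                 similarity_threshold: float = 0.8) -> bool:
    """True when the second person is asserted to be the first user."""
    return (proximity_value(distance_cm) > proximity_threshold
            and similarity > similarity_threshold)

print(is_same_user(distance_cm=4.0, similarity=0.86))   # True
print(is_same_user(distance_cm=30.0, similarity=0.86))  # False
```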
Embodiments may be provided as a software program or computer program product including a non-transitory computer-readable storage medium having stored thereon instructions (in compressed or uncompressed form) that may be used to program a computer (or other electronic device) to perform processes or methods described herein. The computer-readable storage medium may be one or more of an electronic storage medium, a magnetic storage medium, an optical storage medium, a quantum storage medium, and so forth. For example, the computer-readable storage medium may include, but is not limited to, hard drives, floppy diskettes, optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable ROMs (EPROMs), electrically erasable programmable ROMs (EEPROMs), flash memory, magnetic or optical cards, solid-state memory devices, or other types of physical media suitable for storing electronic instructions. Further embodiments may also be provided as a computer program product including a transitory machine-readable signal (in compressed or uncompressed form). Examples of transitory machine-readable signals, whether modulated using a carrier or unmodulated, include, but are not limited to, signals that a computer system or machine hosting or running a computer program can be configured to access, including signals transferred by one or more networks. For example, the transitory machine-readable signal may comprise transmission of software by the Internet. Separate instances of these programs can be executed on or distributed across any number of separate computer systems. Thus, although certain steps have been described as being performed by certain devices, software programs, processes, or entities, this need not be the case, and a variety of alternative implementations will be understood by those having ordinary skill in the art. Additionally, those having ordinary skill in the art will readily recognize that the techniques described above can be utilized in a variety of devices, physical spaces, and situations. Although the subject matter has been described in language specific to structural features or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as illustrative forms of implementing the claims. | 75,670 |
11858144 | DETAILED DESCRIPTION The various embodiments described and illustrated are for the purpose of showing some example embodiments of the present invention and are not intended to limit in any way the scope of the present invention. In embodiments of the present invention, a system, method, and apparatus for an autonomous body interaction system are provided. A system and method are provided for a robot interacting with a human for the purpose of active application of pressure to specific points or regions of the human body. In an embodiment, the object manipulation system as shown inFIG.1is utilized. In embodiments, the system and method effects one or more of the following: localizes the position of the body (human or otherwise), detects the configuration of the body, identifies the surface regions of the body, predicts the underlying anatomy of the body, assesses the state of the body, plans manipulation of the body, and executes a predetermined and/or dynamically updated plan on the body. In embodiments, the system and method effects the following: localizes the position of the body (human or otherwise), detects the configuration of the body, identifies the surface regions of the body, predicts the underlying anatomy of the body, assesses the state of the body, plans manipulation of the body, and executes a predetermined and/or dynamically updated plan on the body. In embodiments, a system, method, and apparatus provide for a therapeutic massage plan applied by a robot with one or more arms, to the body of a human lying prone, face down, on a massage table. The robot arm(s) are positioned to the side of the table such that the workspace reach of each robot arm substantially covers the target regions of the therapeutic massage. In an embodiment, the mounting position of the robot arm(s) is determined to maximize the pressure application required for the therapeutic massage. In embodiments, the therapeutic massage or other plan can be effected on the front region or a side or other region of a body. In embodiments, the therapeutic massage or other plan can be effected on a body which is human, mammal, animal, or a non-living object. In embodiments, the therapeutic massage or other plan can be effected on non-living soft body objects. In an embodiment, the therapeutic massage plan includes effecting one or more interaction goals which are manipulation goals that specify additional data relevant to the manipulation and interaction with a body. The additional data included in the interaction goals includes, without limitation: light contact along the surface of the skin, moderate pressure contact to somewhat displace the skin from the subcutaneous fascia, higher pressure contact to displace, compress, and mobilize muscle tissue, and higher pressure contact to mobilize deeper tissue and skeletal structures. Additional data can be included. In embodiments, overlays can be added to these interaction goals which have additional changes in position or force that vary in time around a mean value of zero. Such overlays can inject additional therapeutic effects into an interaction goal which does not normally have them.
Example overlay effects include: varying the position of goals in a plane parallel to the surface of the target body in a sinusoidal manner in a direction perpendicular to the target's muscle fiber (known as the cross-fiber direction), in a direction parallel to the muscle fiber, or in a circular manner; or varying the force goals in a sinusoidal manner in a direction normal to the target body (also referred to as percussive force or percussive force overlay). In embodiments, when manipulating soft objects, the additions to the controller allow for a mixture of goals to be desired and achieved in a way similar to what humans are capable of achieving. These goals are extended into complex plans that are focused on a therapeutic outcome. The complexities in terms of object manipulation and object modeling increase when considering a living body such as a human body versus any soft object. In embodiments, there is a tracking of the body as it moves and the system and method are configured to perform manipulation on the body while the body is in motion, maintaining contact with the body as it moves on its own and as the robot or tool causes it to move. In an embodiment, the table (or work table or surface) under the body or object is able to articulate the different parts of the body independently. Throughout the specification, the surface or structure supporting the body or object is referred to as “table” for ease of reference in embodiments but is not meant to be limited to a table. The supporting structure or surface can be the ground, a chair or other furniture, a highly adjustable table, or other structure or means of support. In an embodiment, the supporting structure or table includes handles that the user can grip with their hands. In an embodiment, the body interaction manipulation is configured to provide traction relative to the hand that is fixed in place holding the handle of the table. In an embodiment, the supporting structure or table includes stirrups into which the user can insert their feet. In an embodiment, the body interaction manipulation is configured to provide traction to the foot that is fixed in place and inserted into the stirrup. In an embodiment, the handle(s) are associated with the robot and the robot controls the interaction of the user's grip on the handle. For the hand that is holding the handle attached to the robot, the body interaction manipulation provides resistance and mobilization to the hand, arm, and shoulder. Force feedback is used to assess the resistance and add resistance for the user's body. The robot moves the user's hand, arm, and shoulder through a range of motion until there is resistance that reaches a predetermined or dynamic threshold. In an embodiment, the robot includes stirrups into which the user can insert their feet. For the foot that is in the stirrup attached to the robot, the body interaction manipulation provides resistance and mobilization to the foot, leg, and hip. Force feedback is used to assess the resistance and add resistance for the user's body. The robot moves the user's foot, leg, and hip through a range of motion until there is resistance that reaches a predetermined or dynamic threshold. FIG.1depicts the body interaction system, primarily comprising a touch point11, a body12and a table13. The body's volume and weight are supported by the table while the system moves the touch point into contact with and along the body. 
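The zero-mean overlay effects described above (a sinusoidal cross-fiber position overlay, a circular position overlay in the plane of the body surface, and a percussive force overlay normal to the body) can be expressed as simple functions of time. The amplitudes and frequencies below are illustrative assumptions, not values taken from the disclosure.

```python
# Sketch of zero-mean overlays added to an interaction goal, per the examples
# above.  Amplitudes and frequencies are illustrative assumptions.
import math

def cross_fiber_overlay(t, amplitude_mm=5.0, freq_hz=1.0):
    """Position offset along the cross-fiber direction (mean value of zero)."""
    return amplitude_mm * math.sin(2.0 * math.pi * freq_hz * t)

def circular_overlay(t, radius_mm=5.0, freq_hz=0.5):
    """Position offsets in the plane parallel to the body surface."""
    angle = 2.0 * math.pi * freq_hz * t
    return radius_mm * math.cos(angle), radius_mm * math.sin(angle)

def percussive_force_overlay(t, amplitude_n=10.0, freq_hz=4.0):
    """Force offset in the direction normal to the target body."""
    return amplitude_n * math.sin(2.0 * math.pi * freq_hz * t)

for t in (0.0, 0.125, 0.25):
    print(cross_fiber_overlay(t), circular_overlay(t), percussive_force_overlay(t))
```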
In an embodiment, the planned contact of the touch point with the body is defined by a plurality of interaction goals10. In an embodiment, the body is a human body. In an embodiment, the touch point is a robotic end effector specifically designed to mimic the hands, fingers, palm, forearm, and/or elbow of a human in form and function. In an embodiment, a body, soft body object, or other object is lying on or supported by a flat surface or worktable14. At least one robot arm with at least one end effector or tool is employed. The worktable provides support for the body/object and the tool which acts upon the body/object. The tool moves through freespace and into physical contact with the object. Such contact between the tool and the body/object can be predetermined or planned through a plurality of specified manipulation goals or sequences of specific types of contact or pressure. In an embodiment, the interaction goals are a sequence of landmarks, both on the surface of the body and inside the body in close proximity to the surface. When the interaction goals are inside the body, this is a position that is achieved through deformation of the body, where the position is on the surface of the deformed or compressed body. The positioning of these interaction goals and the resulting deformation or compression of the body required to achieve these positions is associated with a specified beneficial change in the state or condition of part of the body or the whole of the body. In an embodiment, the interaction goals focus spatially on regions of the skin that are reliably correlated to underlying tissue properties. Examples of tissue properties include those associated with unwanted tension, adhesions, and trigger points. The manipulation of these tissue properties is generally associated with a positive change in the tissue state, including the release of unwanted tension, loosening of adhesions between tissue layers, and reflexive smoothing of muscles when stimulating specific nerves. In an embodiment, the targeting and the planning progress from easily-identifiable anatomical features to less-defined features and “hidden” features such as features within the body (e.g., bone structure, sensitivities, etc.). These techniques rely upon the feedback mechanism of manipulating the body while also recording new sensed information resulting from that manipulation. In an embodiment, the method and system include a manipulation feedback mechanism with a robot control system that has a dynamic contact point or patch. FIG.2shows a body interaction procedure according to an embodiment of the present invention. InFIG.2, the localization step200locates and segments the body data in the sensed data. In an embodiment, the system and method can assign object labels for ease of reference and recording. InFIG.2, the pose step201determines the body position and body orientation. The shape step202estimates the body shape parameters based on the body data. The configuration step203estimates the body joint pose parameters based on the body data. In the model step204, the body data, body shape, and body joint pose are combined into a body model. In the anatomy step205, the body model is registered with an anatomical model to generate the anatomical mapping. In the physiology step206, the sensed physiologic signals of the body are related to the body model through a physiologic mapping. In the protocol step207, the body model, anatomical mapping, and physiologic mapping are utilized to generate the interaction protocol.
In the plan step208, the interaction protocol is used to generate the interaction plan. In the goal step209, the interaction plan is used to generate the interaction goals. In the execute step210, the interaction goals are used to generate commands executed by the robot. In an embodiment, the localization step is implemented with a deep learning model trained to assign object labels to sensed data associated with a body. A further embodiment utilizes sensed RGB-D (red, green, and blue color-depth sensing) data from one or more sensors. RGB-D sensors work in association with an RGB sensor camera and are able to augment a conventional image with depth information on a per-pixel basis. In an embodiment, the pose step is implemented with a deep learning inference model that detects the body's joint poses and semantically segments the sensed data, classifying data as body data. In an embodiment, the shape step and configuration step are performed jointly by fitting model parameters for both pose and shape. In an embodiment, the configuration step is implemented with a deep learning pose estimator. In an embodiment, the model step is implemented as a finite element analysis (FEA) model generated based on the body shape determined in the shape step. In an embodiment, the anatomy step utilizes a morphable anatomical body model that utilizes surface landmarks from the fitted model to spatially transform the anatomical body model. In an embodiment, the physiology step utilizes thermographic imagery to estimate the blood flow and oxygenation of parts of the body. In an embodiment, the physiology step utilizes a sensed force from sensors in the robotic arms to estimate the relative stiffness of parts of the body. The embodiments described herein are extensions of and improvements on soft object manipulation, allowing for the complexity of handling a living body. In an embodiment, an extension is an adaptation of the plan to the specific morphology of a body and consideration for its pose and articulation. In an embodiment, the protocol step has the user select from a variety of available protocols. Each protocol defines interaction segments. Interaction segments are defined by a set of interaction goals. In an embodiment, the system and method can occur with the following steps, and their subsequent metadata, in this order: the regions where the goals are located, the level of force, the duration, the style, and a generalized intensity level. The output of this embodiment is an ordered sequence of segment specifications, which can then be modified by a user. In an embodiment, the system and method can occur with the following steps (the order of which can change, or steps can be removed): the regions where the goals are located, the level of force, the duration, the style, and a generalized intensity level. In an embodiment, a stroke segment is defined as a sequence of interaction goals. In an embodiment, a stroke specification is defined as a sequence of interaction goals. In an embodiment, the plan step has the user make modifications to the protocol before the protocol is executed to select or generate final interaction goal sets. Modifications may include changes to any of the individual metadata associated with a stroke specification, or elimination of entire stroke segments based on the user's preferences.
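The segment specifications described above, with their metadata of regions, force level, duration, style, and generalized intensity, can be represented with a small data structure. The field names and example values below are assumptions made for illustration only.

```python
# Sketch of a stroke segment specification and a protocol as an ordered
# sequence of such specifications.  Field names and values are illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class StrokeSegmentSpec:
    region: str          # e.g., "upper back"
    force_level: float   # e.g., newtons or a normalized force level
    duration_s: float
    style: str           # e.g., "gliding" or "kneading"
    intensity: float     # generalized intensity level, 0..1

protocol: List[StrokeSegmentSpec] = [
    StrokeSegmentSpec("upper back", 20.0, 60.0, "gliding", 0.4),
    StrokeSegmentSpec("mid back", 35.0, 90.0, "kneading", 0.6),
    StrokeSegmentSpec("lower back", 30.0, 60.0, "gliding", 0.5),
]

# A user modification might drop a segment entirely or scale force in a region.
protocol = [s for s in protocol if s.region != "lower back"]
print(protocol)
```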
In an embodiment, once modification(s) are made and finalized, the system iterates over each of the stroke segments and either selects a matching prerecorded stroke segment from a database or generates a new stroke segment through an online automated stroke segment generation process, using the metadata as input. Additional goals are inserted in between the selected or generated stroke segments to produce one continuous trajectory for execution. This output is referred to as the interaction plan or the plan. In an embodiment, the execute step involves the user beginning the execution process through an indicator in the user interface (UI). The interaction plan is executed by collecting the parameters associated with the requirements on the UI and sending individual goals to the controller at the appropriate time indicated with each goal. During execution, the user has additional controls to modify parameters of the body mapping, including to adjust for poor alignment, and/or to modify the future segments of the plan. In an embodiment, body model adjustments do not alter the individual goals, but do modify the way that the individual goals are transformed by the controller at time of execution. Adjustments to the plan, e.g., skipping, repeating, extending and/or shortening segments within the plan, adjust the execution of the plan by making adjustments to the remaining unexecuted portion of the plan. In embodiments, the way in which the protocols are defined allows a mixing of different protocols together simultaneously and sequentially while also maintaining continuity and achievability for the underlying robotic goals. FIG.3shows a layout of a body mapping. InFIG.3, multiple regions are defined along the illustrated body from the neck to the upper thigh. This constitutes a basic set of regions for a user lying face down where a stationary set of arms is mounted alongside the table, near the body's shoulder. Within these regions, segments of trajectories (q1, q2, q3, etc.) are displayed, having time sequences of goals (q.sub.i,0 to q.sub.i,n for segment i). These segments are essentially “chained” together via interpolation along the body's surface to generate a final manipulation plan. In an embodiment, the body mapping is defined by a set of regions including: the neck, the upper back, the mid back, the lower back, the hips, and the upper thigh. In a further embodiment, regions for the arms and legs are included as well, covering the whole body. In an alternate embodiment, different regions can be defined for the body mapping. In an embodiment, the body mapping includes specification of goals that are on the skin and below the surface of the skin. Goals with position elements close to the skin generally correspond to low force goals used in more gentle styles of massage. Goals with positions below the skin are generated by the application of large forces, as the force component deforms the surface of the subject based on a bulk elastic behavior of the body tissue. In an embodiment, the body mapping is defined as a set of region boundaries offset from anatomical positions including an absolute or relative offset from the spine in order to target the erector spinae muscle group. In an embodiment, the body mapping is used to modify the plan based on model characteristics.
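One way to chain the selected or generated stroke segments into one continuous trajectory, inserting additional goals between segments as described above, is sketched below. Linear interpolation between the last goal of one segment and the first goal of the next is an illustrative choice; the disclosure states only that additional goals are inserted between segments.

```python
# Sketch of chaining stroke segments (each a sequence of goal positions) into a
# single continuous trajectory with interpolated transition goals in between.
def interpolate(p, q, steps):
    """Return `steps` points evenly spaced strictly between p and q."""
    return [tuple(a + (b - a) * (i + 1) / (steps + 1) for a, b in zip(p, q))
            for i in range(steps)]

def chain_segments(segments, transition_steps=3):
    plan = []
    for segment in segments:
        if plan:
            plan.extend(interpolate(plan[-1], segment[0], transition_steps))
        plan.extend(segment)
    return plan

q1 = [(0.0, 0.0, 0.0), (0.0, 5.0, -1.0)]    # illustrative goal positions
q2 = [(2.0, 8.0, -1.0), (2.0, 12.0, -2.0)]
print(chain_segments([q1, q2]))
```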
Before the execution of the plan, the user has the ability to select the different regions of the body mapping to indicate a desire for more/less force and total duration of time spent within that region. The different segments of the plan can then be modified to either scale or cap their forces, increase or decrease their duration (by decreasing or increasing velocity), or be eliminated entirely. Alternatively, an automatic selection can be preset and can occur according to the present plan. In an embodiment, the body mapping is defined relative to body landmarks, including: the spine, the iliac crest, the hips, the T-3 spinous processes, the clavicle, the inferior angle of the scapula, the C-7 vertebra, the T-3 vertebra, the T-7 vertebra, the L-4 vertebra, the erector spinae group, the trapezius, the rhomboids, and the nuchal furrow. In an embodiment, the body mapping relates the body landmarks through a geometric transform. In an embodiment, the geometric transform is a three dimensional projective transform accounting for a uniform scaling of the distance between the body landmarks when transforming goals from one target subject to another. In an embodiment, the body mapping utilizes the detected anatomical skeletal joint positions along with detected body shape to relate goals from one individual subject to another. In an embodiment, the body mapping is performed in an autonomous manner. The perception system detects the body of the subject and segments the points associated with the subject into the subject point cloud. The perception system detects the pose of the subject, including the configuration of the subject's skeletal joints. The human body is not only a soft object; its surface topology is also complex, with many folds and features that can be challenging to predict and treat. The various control system embodiments described herein are able to manage progressing along this complex surface, and the body mapping system embodiments described herein allow for better adaptation to a variety of features and situations. Such features and situations can include excessive folding of clothing, various body types, uneven positioning of the body, disability of the body requiring an unconventional positioning, heightened sensitivities of specific body regions or types of contact by the touch point or end effector, among others. FIG.4shows a robotic system's vision components situated above and in front of the robot. These components are a way the system senses the body to be manipulated, providing the data that is resolved into a model of the body. Several sensors410,412,418,414are arranged above the table430and arranged such that when their data is combined they have both a more complete and more validated view of the body. These sensors can be configured to generate thermographic imagery, visible light imagery, infrared imagery, and 3D range sensing. The robot arm440is shown attached to the table, and the robot manipulator's end effector tool450is at one end of the arm; at the other end, the robot is attached to the table430. FIG.5shows a robotic system with a body550as the object being targeted for manipulation. The robot arm540is shown in contact with the body550, and the other end of the arm is shown attached to the table530. Also depicted are sensors510,512,514mounted above the table530such that the sensors' frustums provide a redundant and comprehensive view of the body550on the table530.
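A landmark-based transform that maps goals from one subject to another, of the general kind described above, can be estimated from corresponding landmark positions. The least-squares similarity transform (uniform scale, rotation, translation) below is one common approach and is offered as an illustrative assumption, not as the specific transform required by the disclosure.

```python
# Sketch: estimate a uniform-scale rigid transform from corresponding body
# landmarks on a source and a target subject, then apply it to goal positions.
import numpy as np

def fit_similarity_transform(src, dst):
    """src, dst: (N, 3) arrays of corresponding landmark positions."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    s_c, d_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(s_c.T @ d_c)
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:            # avoid a reflection
        D[2, 2] = -1.0
    R = (U @ D @ Vt).T
    scale = np.trace(np.diag(S) @ D) / (s_c ** 2).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

def map_goals(goals, scale, R, t):
    return scale * (np.asarray(goals, float) @ R.T) + t

# Four illustrative landmark positions (e.g., C-7, T-7, L-4, iliac crest).
src = [[0, 0, 0], [12, -18, 2], [-10, -20, 3], [0, -45, 1]]
dst = [[1, 1, 0], [13, -19, 2], [-10, -22, 3], [1, -48, 1]]
scale, R, t = fit_similarity_transform(src, dst)
print(map_goals([[0, -10, 0.5]], scale, R, t))
```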
In an embodiment, the interaction protocol contains a manipulation protocol that defines manipulation goals. The manipulation goals are generated by an interaction teaching system. In an embodiment, the interaction teaching system digitizes the manual manipulation of a robot arm that includes a handle in the end effector assembly. The digitization records the position, orientation, and force being applied to a surface by the touch point. It is assumed that the contact force is purely force based, as there is no rigid connection between the touch point and the subject that would allow for torque to be transferred; however, since the force/torque sensor is mounted remotely from the point of contact (at the robot wrist), there will be a measured torque moment. The torque moment of the external force measurement is used to calculate the most likely contact point of the touch point. In an embodiment, the interaction teaching system perceives structural disturbances in the body. These structural disturbances are encoded along with the manipulation goals. The disturbances are noted as special cases for which progression management strategies are formulated. In a further embodiment, a progression management strategy involves the transient relaxation of pressure to allow the touch point to move over the region containing a similar structural disturbance. In an embodiment, the interaction teaching system utilizes manual manipulation to encode complex maneuvers around body parts. When these maneuvers are played back, the motion dynamics are scaled spatially from the source body in the recording to the target body being played back. In an embodiment, the robot itself is a recording mechanism that is able to digitize and replicate or “play back” the complex motions and intentions of a human operator, still adapting using the body mapping and control embodiments described herein to the specific constituency and state of a body that is different from the state of a body on which the digital replication or recording was made. In an embodiment, the overlay effects include modulation based on an auditory signal, music, or bass. Embodiments of the robot control include a computer or processor controlled system in which programmable actions or steps are coded via a computer software program and used to direct or control the movements of the robot. Embodiments of the programmable instructions to control the robot or robot arm or robot arm with an end effector can be effected by a predefined set of instructions, or a machine learning set of instructions in which the system receives feedback from the sensors of the robot to modify pressure, frequency of touch, and other characteristics (e.g., cold, warmth, etc.). Features of the various embodiments of the above-identified system and method described herein can be modeled and/or effected and/or controlled by a general computer, special purpose computer, a processor, and a smart device having a processor. Embodiments of the method instructions can be stored on a computer-readable medium, the medium being virtual or hardware or portable or in the cloud/networked, having instructions thereon which are readable or can be made to be readable by a computer or processor so that the computer software instructions can be executed.
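Under the stated assumption of a pure force contact, the wrench measured at the wrist constrains the contact point to lie on the line of action of the applied force, since torque = r x F. The sketch below recovers the point on that line closest to the sensor origin; intersecting the line with the known touch point geometry, and handling of coordinate frames, are omitted. This is an illustrative reconstruction, not the disclosure's exact computation.

```python
# Sketch: localize the likely contact from a wrist force/torque reading,
# assuming a pure force contact so that torque = r x F.  The returned point is
# the point on the force's line of action closest to the sensor origin.
import numpy as np

def contact_line_from_wrench(force, torque):
    f = np.asarray(force, float)
    tau = np.asarray(torque, float)
    f2 = float(f @ f)
    if f2 < 1e-9:
        raise ValueError("force too small to localize the contact")
    r0 = np.cross(f, tau) / f2          # closest point on the line of action
    direction = f / np.sqrt(f2)         # contact lies at r0 + lambda * direction
    return r0, direction

r0, d = contact_line_from_wrench(force=[0.0, 0.0, -30.0], torque=[3.0, -1.5, 0.0])
print(r0, d)   # r0 is [-0.05, -0.1, 0.0] for this example wrench
```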
The various embodiments described herein, and those equivalents thereto, can be used for a variety of nonanalogous objects, e.g., human body, animal body, soft body having deformable characteristics, a nonhomogenous body having soft and hard features. The various embodiments described herein, and those equivalents thereto, can be used for massage applications, sensing applications, modeling applications, and others. The modifications listed herein and other modifications can be made by those in the art without departing from the ambit of the invention. Although the invention has been described above with reference to specific embodiments, the invention is not limited to the above embodiments and the specific configurations shown in the drawings. For example, some components shown can be combined with each other as one embodiment, and/or a component can be divided into several subcomponents, and/or any other known or available component can be added. The processes are not limited to those shown in the examples. Those skilled in the art will appreciate that the invention can be implemented in other ways without departing from the substantive features of the invention. For example, features and embodiments described above can be combined with and without each other. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive. Other embodiments can be utilized and derived therefrom, such that structural and logical substitutions and changes can be made without departing from the scope of this disclosure. This Specification, therefore, is not to be taken in a limiting sense, along with the full range of equivalents to which such claims are entitled. Such embodiments of the inventive subject matter can be referred to herein, individually and/or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept if more than one is in fact disclosed. Thus, although specific embodiments have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose can be substituted for the specific embodiments shown. This disclosure is intended to cover any and all adaptations and/or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of ordinary skill in the art upon reviewing the above description. | 26,267 |
11858145 | DESCRIPTION OF EMBODIMENTS Reference will now be made in detail to various embodiments of the subject matter, examples of which are illustrated in the accompanying drawings. While various embodiments are discussed herein, it will be understood that they are not intended to limit to these embodiments. On the contrary, the presented embodiments are intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the various embodiments as defined by the appended claims. Furthermore, in this Description of Embodiments, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present subject matter. However, embodiments may be practiced without these specific details. In other instances, well known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the described embodiments. Overview of Discussion A device which can operate via remote controlled instruction, autonomously, or some combination thereof is described. The device is robotic and may be referred to as a “robot” or as a “robotic device,” and includes an auger-based drive system which facilitates the movement and/or operation of the device in relation to a portion of piled granular material in a bulk store, such as a grain bin. More particularly, because of the augers in the auger-based drive system, the device can operate and maneuver upon or beneath piled granular material. Additionally, and advantageously, augers of the auger-based drive system move and disrupt piled granular material as a consequence of the movement of the device. A bulk store is the place where granular material is piled for bulk storage. Although a grain bin is frequently used herein as an example of a bulk store, nearly any bulk store which is large enough for a human to access and work inside or upon the stored granular material is a candidate for operation of the device described herein. Accordingly, it should be appreciated that other large bulk stores are also suitable bulk stores for use of the described device in relation to piled granular material in many of the manners described herein. Some examples of other large bulk stores include, but are not limited to: containers (e.g., railcars, semi-trailers, barges, ships, and the like) for transport/storage of granular material, buildings (e.g., silos) for storage of granular material, and open storage piles of granular material. Bulk stored granular material can present many safety concerns for humans. For example, bulk stores are often hot, dusty, poorly lit, and generally inhospitable work environments for humans. Additionally, entrapments can take place when a farmer or worker is in a bin and bulk stored material, such as grain, slides onto or engulfs the person. Entrapments can happen because the piled granular material (e.g., grain) is at a critical slope angle and may slide when disturbed by the person, or may slide when extraction augers disturb the bulk stored granular material. As one example, steep walls of grain can avalanche onto a farmer/worker trying to mitigate problems in a grain bin, inspect the stored grain, or agitate the grain to improve the outflow. Sometimes a crust layer can form on the surface of a pile of grain. Additionally, sometimes a crust layer can form over a void in a pile of grain, and when a farmer/worker walks across it or tries to break it with force it may collapse.
This type of crust is called a grain bridge, and the grain bridge can collapse and entrap a person who walks on it. As this grain bridge and/or the size of the void below it may be invisible to the human eye, it can present an unknown danger to a farmer/worker. As will be discussed, many of these and other safety concerns can be reduced or eliminated through use of the device and techniques/methods described herein. Among other things, the device described herein can be used to address managing the quality of bulk stored granular material (e.g., grain in a bin) through tasks like, but not limited to: inspections of the bulk stored granular material, leveling of the bulk stored granular material, agitating of the bulk stored granular material to prevent/reduce spoilage, dispersing of the bulk stored granular material while it is being loaded into the bulk store, feeding a sweep auger or other collection device which removes the bulk stored granular material from the bulk store, and/or lowering the slope angles of the granular material in a partially emptied bulk store. In short, the device can accomplish numerous tasks which when done by the device preclude the need for humans to enter a bulk store, or else make it safer when it is necessary for humans to enter a bulk store. In various embodiments, these tasks can be carried out by the device under remote-control of the device by an operator located outside the bulk store, may be carried out in a partially automated fashion by the device, and/or may be carried out by the device in fully automated fashion. Discussion begins with a description of notation and nomenclature. Discussion then shifts to description of some block diagrams of example components of some examples of a device which moves about and/or operates in relation to a bulk stored pile of granular material. A variety of sensors and payloads which may be included with and/or coupled with the device are described. Numerous example views of the exterior of a device are presented and described, to include description of the auger-based drive system of the device. Several systems for remote-controlled, semi-autonomous, and autonomous operation of the device are described. Additionally, systems and techniques for storing information from the device and/or providing information and/or instructions to the device are described. An example bulk store for granular material is then depicted and described in conjunction with operation of the device in relation to piled granular material in the bulk store. Several methods for delivery, by the device, of probes in a bulk store are discussed. Operation of the device and components thereof, to include some sensors and/or payloads of the device, is discussed in conjunction with description of an example method of bulk store leveling and in conjunction with a method of surface management of piled grain. Notation and Nomenclature Some portions of the detailed descriptions which follow are presented in terms of procedures, logic blocks, processes, modules and other symbolic representations of operations on data bits within a computer memory. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. In the present application, a procedure, logic block, process, module, or the like, is conceived to be one or more self-consistent procedures or instructions leading to a desired result.
The procedures are those requiring physical manipulations of physical quantities. Usually, although not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in an electronic device/component. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the description of embodiments, discussions utilizing terms such as “controlling,” “obtaining,” “satisfying,” “failing to satisfy,” “traversing,” “inciting,” “satisfying,” “ceasing traversal,” “continuing traversal,” “capturing,” “sensing,” “collecting,” “directing,” and “determining,” “communicating,” “receiving,” “receiving instructions,” “receiving data.” “sending,” “relaying,” “providing access,” “deliver,” “deposit,” “place,” and “communicatively coupling,” or the like, refer to the actions and processes of an electronic device or component such as (and not limited to): a host processor, a sensor processing unit, a sensor processor, a digital signal processor or other processor, a memory, a sensor (e.g., a temperature sensor, motion sensor, etc.), a computer, a remote controller, a device which moves about and/or operates in relation to a portion of piled granular material, some combination thereof, or the like. The electronic device/component manipulates and transforms data represented as physical (electronic and/or magnetic) quantities within the registers and/or memories into other data similarly represented as physical quantities within memories and/or registers or other such information storage, transmission, processing, and/or display components. Embodiments described herein may be discussed in the general context of processor-executable instructions residing on some form of non-transitory processor-readable medium, such as program modules or logic, executed by one or more computers, processors, or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or distributed as desired in various embodiments. In the figures, a single block may be described as performing a function or functions; however, in actual practice, the function or functions performed by that block may be performed in a single component or across multiple components, and/or may be performed using hardware, using software, or using a combination of hardware and software. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure. Also, the example electronic device(s) described herein may include components other than those shown, including well-known components. 
The techniques described herein may be implemented in hardware, or a combination of hardware with firmware and/or software, unless specifically described as being implemented in a specific manner. Any features described as modules or components may also be implemented together in an integrated logic device or separately as discrete but interoperable logic devices. If implemented in software, the techniques may be realized at least in part by a non-transitory computer/processor-readable storage medium comprising computer/processor-readable instructions that, when executed, cause a processor and/or other components of a computer, computer system, or electronic device to perform one or more of the methods and/or actions of a method described herein. The non-transitory computer/processor-readable storage medium may form part of a computer program product, which may include packaging materials. The non-transitory processor-readable storage medium (also referred to as a non-transitory computer-readable storage medium) may comprise random access memory (RAM) such as synchronous dynamic random access memory (SDRAM), read only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), flash memory, other known storage media, and the like. The techniques additionally, or alternatively, may be realized at least in part by a processor-readable communication medium that carries or communicates code in the form of instructions or data structures and that can be accessed, read, and/or executed by a computer or other processor. The various illustrative logical blocks, modules, circuits and instructions described in connection with the embodiments disclosed herein may be executed by one or more processors, such as host processor(s) or core(s) thereof, digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), application specific instruction set processors (ASIPs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. The term “processor,” as used herein may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated software modules or hardware modules configured as described herein. Also, the techniques could be fully implemented in one or more circuits or logic elements. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a plurality of microprocessors, one or more microprocessors in conjunction with an ASIC or DSP, or any other such configuration or suitable combination of processors. Example Block Diagrams of a Device which Moves About and/or Operates in Relation to a Pile of Granular Material FIG.1shows an example block diagram of some aspects of a device100which moves about and/or operates in relation to a pile of granular material, in accordance with various embodiments. As previously discussed, device100may be referred to as a robot and/or robotic device, and device100may carry out some or all of its functions and operations based on stored instructions.
As shown, example device100comprises a communications interface101, a host processor102, host memory103, an interface104, motor controllers105, and drive motors106. In some embodiments, device100may additionally include one or more of communications107, a camera(s)108, one or more sensors120, and/or one or more payloads140. Communications interface101may be any suitable bus or interface which facilitates communications among/between components of device100. Examples of communications interface101include a peripheral component interconnect express (PCIe) bus, a universal serial bus (USB), a universal asynchronous receiver/transmitter (UART) serial bus, a suitable advanced microcontroller bus architecture (AMBA) interface, an Inter-Integrated Circuit (I2C) bus, a serial digital input output (SDIO) bus, or other equivalent and may include a plurality of communications interfaces. The host processor102may, for example, be configured to perform the various computations and operations involved with the general function of device100(e.g., sending commands to move, steer, avoid obstacles, and operate/control the operation of sensors and/or payloads). Host processor102can be one or more microprocessors, central processing units (CPUs), DSPs, general purpose microprocessors, ASICs, ASIPs, FPGAs or other processors which run software programs or applications, which may be stored in host memory103, associated with the general functions and capabilities of device100. Host memory103may comprise programs, modules, applications, or other data for use by host processor102. In some embodiments, host memory103may also hold information that is received from or provided to interface104, motor controller(s)105, communications107, camera(s)108, sensors120, and/or payloads140. Host memory103can be any suitable type of memory, including but not limited to electronic memory (e.g., read only memory (ROM), random access memory (RAM), or other electronic memory). Interface104is an external interface by which device100may receive input from an operator or instructions. Interface104is one or more of a wired or wireless transceiver which may provide connection to an external transmission source/recipient for receipt of instructions, data, or direction to device100or offload of data from device100. One example of an external transmission source/external recipient may be a base station to which device100communicates collected data or from which device100receives instructions or direction. Another example of an external transmission source/recipient is a handholdable remote-controller to which device100communicates collected data or from which device100receives instructions or direction. By way of example, and not of limitation, in various embodiments, interface104may comprise one or more of: a cellular transceiver, a wireless local area network transceiver (e.g., a transceiver compliant with one or more Institute of Electrical and Electronics Engineers (IEEE) 802.11 specifications for wireless local area network communication (e.g., WiFi)), a wireless personal area network transceiver (e.g., a transceiver compliant with one or more IEEE 802.15 specifications (or the like) for wireless personal area network communication), and a wired serial transceiver (e.g., a universal serial bus for wired communication).
Motor controller(s)105are mechanism(s), typically circuitry and/or logic, which operate under instruction from processor102to drive one or more drive motors106with electricity to govern/control the direction and/or speed of rotation of the drive motor(s)106and/or other mechanism of movement to which the drive motor(s)106are coupled (such as augers). Motor controller(s)105may be integrated with or separate from drive motor(s)106. Drive motor(s)106are electric motors which receive electrical input from motor controller(s)105and turn a shaft in a direction and/or speed responsive to the electrical input. In some embodiments, drive motors106may be coupled directly to a mechanical means of drive motivation and steering—such as one or more augers. In some embodiments, drive motors106may be coupled indirectly, such as via a gearing or a transmission, to a mechanical means of drive motivation and steering—such as one or more augers. Communications107, when included, may comprise external interfaces in addition to those provided by interface104. Communications107may facilitate wired and/or wireless communication with devices external to and in some instances remote (e.g., many feet or even many miles away) from device100. Communications protocols may include those used by interface104as well as others. Some examples include, but are not limited to: WiFi, LoRaWAN (e.g., long range wireless area network communications on the license-free sub-gigahertz radio frequency bands), IEEE 802.15.4-2003 standard derived communications (e.g., xBee), IEEE 802.15.4 based or variant personal area network (e.g., Bluetooth, Bluetooth Low Energy, etc.), cellular, and connectionless wireless peer-to-peer communications (e.g., ESP-NOW). In various aspects, communications107may be used for data collection/transmission, reporting of autonomous interactions of device100, and/or user interface and/or operator interface with device100. Camera(s)108may comprise, without limitation: any type of optical or infrared image sensor for capturing still or moving images. Some examples of suitable cameras include charge-coupled device (CCD) sensor cameras, metal-oxide semiconductor (MOS) sensor cameras, and other digital electronic cameras. Captured images may be utilized by device100for purposes such as navigation and decision making, may be stored, and/or may be transmitted to devices external to device100. In some embodiments, camera(s)108facilitate wayfinding for device100when operating autonomously or semi-autonomously. In some embodiments, camera(s)108facilitate a remote view for an operator when device100is manually driven by a human user via a remote controller or computer system communicatively coupled with device100. In some embodiments, an infrared camera108is used to find hotspots of grain to mix or agitate with device100(to reduce the heat of the hotspot). In some embodiments, computer vision is used by device100to make autonomous decisions based on inputs to processor102from camera(s)108. FIG.2shows a block diagram of a collection of sensors120, any or all of which may be incorporated in device100ofFIG.1, in accordance with various embodiments.
Sensors120illustrate a non-limiting selection of sensors, which include: motion sensor(s)220, GNSS (Global Navigation Satellite System) receiver230, ultrasonic transducer231, LIDAR (light detection and ranging/laser imaging, detection, and ranging)232, temperature sensor233, moisture sensor234, optical sensor235, infrared sensor236(which may be a receiver or an emitter/receiver), electrostatic sensor237, and electrochemical sensor238. In some embodiments, one or more microphones (not depicted) may be included as sensors. For example, an array of microphones may be used with a beamforming technique to locate the directional source of a sound, such as falling granular material being poured, conveyed, streamed, or augered into a bulk store. Some embodiments may additionally, or alternatively, include other sensors not described. In general, individual sensors120operate to detect motion, position, timing, and/or some aspect of environmental context (e.g., temperature, atmospheric humidity, moisture of a sample or probed portion of granular material, distance to an object, shape of an object, solidity of a material, light or acoustic reflectivity, ambient charge, atmospheric pressure, presence of certain chemical(s), noise/sound, etc.). For example, in an embodiment where the piled granular material is grain, many of sensors120are used to determine the state of the grain (e.g., temperature, moisture, electrostatic charge, etc.). In some embodiments, one or more sensors120are used for fall detection, orientation, and to aid in autonomous direction of movement of device100. For example, by detecting temperature of grain, device100may determine hot spots which need to be mixed by traversal with device100or by other means. Similarly, by detecting moisture of grain, device100may determine moist spots which need to be mixed by traversal with device100or by other means. By detecting an electrostatic and/or electrochemical aspect of the atmosphere in a grain bin, a level of dust or other particulates and/or likelihood of an explosion may be detected in order to gauge safety for a human and/or safety for operating device100. Some embodiments may, for example, comprise one or more motion sensors220. For example, an embodiment with a gyroscope221, an accelerometer222, and a magnetometer223or other compass technology, which each provide a measurement along three axes that are orthogonal relative to each other, may be referred to as a 9-axis device. In another embodiment, a three-axis accelerometer222and a three-axis gyroscope221may be used to form a 6-axis device. Other embodiments may, for example, comprise an accelerometer222, gyroscope221, compass, and pressure sensor, and may be referred to as a 10-axis device. Other embodiments may not include all of these motion sensors or may provide measurements along one or more axes. In some embodiments, motion sensors220may be utilized to determine the orientation of device100, the angle of slope or inclination of a surface upon which device100operates, the velocity of device100, and/or the acceleration of device100. In various embodiments, measurements from motion sensors220may be utilized by host processor102to measure direction and distance of travel and may operate as an inertial navigation system (INS) suitable for controlling and/or monitoring maneuvering of device100in a bulk store (e.g., within a grain bin). In some embodiments, motion sensors220may be used for fall detection.
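By way of illustration only, and not as a description of any particular implementation, the following Python sketch shows one way an angle of slope might be approximated from a static three-axis accelerometer sample of the kind motion sensors220could provide; the axis convention, units, and function names are assumptions introduced solely for this example.

import math

def tilt_angle_degrees(ax: float, ay: float, az: float) -> float:
    """Angle between the sensor's z axis and gravity, in degrees.

    (ax, ay, az) is a static accelerometer sample in any consistent unit
    (e.g., m/s^2). When the robot rests on a slope, this angle approximates
    the local slope of the granular surface beneath it.
    """
    horizontal = math.hypot(ax, ay)          # gravity component in the sensor's x-y plane
    return math.degrees(math.atan2(horizontal, az))

if __name__ == "__main__":
    # Level surface: gravity entirely along the z axis.
    print(round(tilt_angle_degrees(0.0, 0.0, 9.81), 1))   # ~0.0
    # Resting on roughly a 20 degree slope.
    print(round(tilt_angle_degrees(9.81 * math.sin(math.radians(20)), 0.0,
                                   9.81 * math.cos(math.radians(20))), 1))  # ~20.0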
In some embodiments, motion sensor(s)220may be used to detect vibrations in the granular material proximate to device100. FIG.3shows a block diagram of a collection of payloads140, any or all of which may be incorporated in device100ofFIG.1, in accordance with various embodiments. Payloads140illustrate a non-limiting selection of payloads, which include: ultraviolet germicidal341, sample gatherer342, percussive343, probe/sensor delivery344, air dryer345, drill346, sprayer347, lights348, and/or ripper349. Ultraviolet germicidal payload341, when included, emits ultraviolet light to kill germs by irradiating in the proximity of device100. Sample gatherer payload342, when included, provides one or more containers or bays for gathering one or more samples of granular material from a pile of granular material upon which device100operates. Percussive payload343, when included, operates to vibrate or percussively impact piled granular material touching or in the proximity of device100. Probe/sensor delivery payload344, when included, operates to insert one or more probes or sensors into piled granular material upon which device100operates and/or to position one or more probes onto piled granular material upon which device100operates. Air dryer payload345, when included, provides a fan and/or heater for drying piled granular material proximate to device100. Drill payload346, when included, operates to bore into and/or sample piled granular material and/or break up crusts or aggregations of piled granular material proximate to device100. Sprayer payload347, when included, operates to spray fungicide, insecticide, or other liquid or powdered treatments onto piled granular material proximate device100. Lights payload348, when included, emits optical and/or infrared illumination in the proximity of device100. Ripper payload349, when included, comprises one or more blades, tines, or the like and is used to rip into, agitate, and/or break up crusts or chunks of aggregated granular material proximate device100. It should be appreciated that various payloads may be delivered, where delivery includes leaving or expelling the payload or a portion thereof at a designated location. For example, delivery can include leaving/installing a probe or sensor. Delivery may also include spraying or spreading a substance such as, but not limited to: a coolant, a flame retardant, an insecticide, a fungicide, or other liquid, gas, or powder. In various embodiments, one or some combination of payloads140may be included in a payload bay of device100. In some embodiments, the payload bay is fixed in place. In some embodiments, the payload bay may be removably coupled to device100to facilitate swapping it for another payload bay to quickly reconfigure device100with various different payloads. Example External Views of a Device which Moves About and/or Operates in Relation to a Pile of Granular Material FIGS.4A-1,4A-2, and4A-3illustrate front elevational views of the exterior of a device100which moves about and/or operates in relation to a pile of granular material, in accordance with various embodiments. With reference toFIG.4A-1, device100includes a body401, motors106(106-1and106-2), transmissions402(402-1and402-2), and augers403-1and403-2. In the illustrated embodiment of device100, a pair of bilateral augers403is utilized. In some embodiments, a drive motor106may be coupled to an auger403(such as to the end of an auger403) in a manner that eliminates the need for a transmission402between the drive motor106and the auger403.
In the depicted embodiments, the transmission is located near the middle of each auger403, thus bifurcating each auger into two portions. InFIG.4A-1, the front portion403-1A of auger403-1is visible, as is the front portion403-2A of auger403-2. In typical operation, augers403sink at least partially into the piled granular material and thrust against it as they rotate. The direction and speed of rotation of the augers403determines the movement fore, aft, left, right, turning left, and/or turning right of device100. In this manner, in various embodiments, device100can move atop a pile of granular material, can move beneath a pile of granular material (i.e., submerged in it), and can move to the surface after being submerged in a pile of granular material. In some embodiments, device100includes one or more payloads140. For example, lights payloads348(348-1and348-2) are included to provide illumination. In some embodiments, device100may additionally or alternatively include a payload bay440which may be fixed to device100or removably couplable with device100. The payload bay440may provide a housing for one or more of the payloads140discussed herein and/or for other payloads. As one example, payload bay440may include sample gatherer payload342(shown in the closed, non-sample gathering position as342A). In some embodiments, one or more cameras108are included and coupled with body401. In some embodiments, one or more sensors120are included and coupled with body401in a manner which provides access to the external environment of device100. For example, one or more of ultrasonic transducer231, LIDAR232, temperature sensor233, moisture sensor234, optical sensor235, infrared sensor236, electrostatic sensor237, and electrochemical sensor238may be included in a manner which provides sensor access to the operating environment of device100. Referring now toFIG.4A-2, device100is illustrated with sample gatherer payload342in an open, sample gathering position342B, to scoop up a sample of granular material as device100moves forward with sample gatherer payload open and submerged into the piled granular material upon which device100operates. Referring now toFIG.4A-3, device100is illustrated without payload bay440. This illustrates a configuration of device100in which payload bay440has been removed or else device100is not configured to support a payload bay440. FIGS.4B-1and4B-2illustrate rear elevational views of the exterior of a device100which moves about and/or operates in relation to a pile of granular material, in accordance with various embodiments. With reference toFIG.4B-1, the rear portion403-1B of auger403-1is visible, as is the rear portion403-2B of auger403-2. With reference toFIG.4B-2, device100is illustrated without payload bay440. This illustrates a configuration of device100in which payload bay440has been removed or else device100is not configured to support a payload bay440. FIGS.4C-1and4C-2illustrate right elevational views of the exterior of a device100which moves about and/or operates in relation to a pile of granular material, in accordance with various embodiments. With reference toFIG.4C-1, the full span of auger403-2is visible, including front portion403-2A and rear portion403-2B, as is the drive motor106-2and transmission402-2which drive auger403-2. This lateral side of the auger-based drive system of device100comprises drive motor106-2, transmission402-2, and auger403-2.
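By way of illustration only, the following Python sketch outlines one possible way a desired forward/turn command could be blended into per-auger speed commands for an opposing-screw, auger-based drive system of the kind described above; the value ranges, scaling, and all identifiers are hypothetical and are not drawn from any particular firmware of device100.

from dataclasses import dataclass

@dataclass
class AugerCommand:
    left_speed: float   # signed speed for auger 403-1 (+ = forward thrust)
    right_speed: float  # signed speed for auger 403-2 (+ = forward thrust)

def mix_drive(forward: float, turn: float, limit: float = 1.0) -> AugerCommand:
    """Blend a forward command and a turn command into per-auger speeds.

    forward: -1.0 (full reverse) .. +1.0 (full forward)
    turn:    -1.0 (hard left)    .. +1.0 (hard right)
    """
    left = forward + turn
    right = forward - turn
    # Scale both commands down together so neither exceeds the allowed limit.
    scale = max(1.0, abs(left) / limit, abs(right) / limit)
    return AugerCommand(left_speed=left / scale, right_speed=right / scale)

if __name__ == "__main__":
    print(mix_drive(forward=0.8, turn=0.3))   # gentle right turn while moving forward
    print(mix_drive(forward=0.0, turn=1.0))   # rotate in place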
As has been discussed, other embodiments may directly drive the auger with the drive motor, thus eliminating the transmission from the auger-based drive system. With reference toFIG.4C-2, device100is illustrated without payload bay440. This illustrates a configuration of device100in which payload bay440has been removed or else device100is not configured to support a payload bay440. FIGS.4D-1and4D-2illustrate left elevational views of the exterior of a device100which moves about and/or operates in relation to a pile of granular material, in accordance with various embodiments. With reference toFIG.4D-1, the full span of auger403-1is visible, including front portion403-1A and rear portion403-1B, as is the drive motor106-1and transmission402-1which drive auger403-1. This lateral side of the auger-based drive system of device100comprises drive motor106-1, transmission402-1, and auger403-1. As has been discussed, other embodiments may directly drive the auger with the drive motor, thus eliminating the transmission from the auger-based drive system. With reference toFIG.4D-2, device100is illustrated without payload bay440. This illustrates a configuration of device100in which payload bay440has been removed or else device100is not configured to support a payload bay440. FIGS.4E-1and4E-2illustrate bottom plan views of the exterior of a device100which moves about and/or operates in relation to a pile of granular material, in accordance with various embodiments. With reference toFIG.4E-1, a bottom plan view of device100is shown with a payload bay440coupled with device100. As can be seen inFIG.4E-1, drive auger403-1and drive auger403-2are arranged in a bi-lateral fashion, and have flighting wound in opposite directions from each other. Thus, the bi-lateral drive augers403-1and403-2may be referred to as “opposing screw” drive augers. Propulsion is through direct interaction with the granular material in which device100operates and can be forward, reverse, sideways, and turning. With reference toFIG.4E-2, device100is illustrated in bottom plan view without payload bay440. This illustrates a configuration of device100in which payload bay440has been removed or else device100is not configured to support a payload bay440. FIG.4Fillustrates a top plan view of the exterior of a device100which moves about and/or operates in relation to a pile of granular material along with a chart475illustrating directional movements, in accordance with various embodiments. Chart475shows some examples of rotations of augers403-1and403-2utilized to implement movement of device100in the directions indicated by the arrows in the chart. The rotations and movement directions in chart475are in relation to the view of device100shown inFIG.4F. Although not depicted, in some embodiments, device100may be operated to move laterally to one side or the other. FIG.4Gillustrates an upper front right perspective view of the exterior of a device100which moves about and/or operates in relation to a pile of granular material, in accordance with various embodiments. Example Systems FIG.5illustrates some example embodiments of a bulk store slope adjustment system500, in accordance with various embodiments. System500includes at least device100when operating autonomously.
In some embodiments, system500may include device100and a remotely located remote controller501which is communicatively coupled by wireline510or wirelessly520with device100(e.g., to interface104) to send instructions or data and/or to receive information or data collected by device100(e.g., from operation of device100and/or from sensor(s)120and/or payload(s)140). Remote controller501may be like a handholdable remote controller for a video game, or a remotely controlled model car or model airplane. In some embodiments, remote controller501may have a display screen for visual display of textual information or still/video images received from device100. In some embodiments, remote controller501is utilized by an operator to maneuver device100and/or to operate sensor(s)120and/or payload(s)140. In some embodiments, system500may include device100and a remotely located computer system506which is communicatively coupled wirelessly580with device100to send instructions or data and/or to receive/access information or data collected by device100(e.g., from operation of device100and/or from sensor(s)120and/or payload(s)140). In some embodiments, system500may include device100along with a communicatively coupled remote controller501and a communicatively coupled remotely located computer system506. It should be appreciated that wireless communications520and580may be peer-to-peer, over a wide area network, or by other protocols. FIG.6illustrates some example embodiments of a bulk store slope adjustment system600, in accordance with various embodiments. In some embodiments, system600includes device100in wireless communicative coupling650(e.g., via the Internet) with one or more of cloud-based602storage603and/or processing604. In some embodiments, cloud-based602storage603is used to store data collected by device100. In some embodiments, cloud-based processing604is used to process data collected by device100and/or to assist in autonomous decision making based on collected data. In some embodiments, system600additionally includes a remotely located computer605, communicatively coupled to cloud602(e.g., via the internet) either wirelessly670or by wireline660. In this fashion, remotely located computer605may access data from device100which has been uploaded to storage603and/or may communicate with or access device100by relay through processing604or cloud602. In some embodiments, system600may additionally include one or more components (remote controller501and/or remotely located computer system506) which were described inFIG.5. In some embodiments, one or more of remote controller501and remote computer system506may be communicatively coupled (e.g.,630/640) with cloud602for transmission and/or receipt of information related to device100. Example Bulk Store and Example Operations to Adjust Slope of a Portion of Piled Granular Material FIG.7Aillustrates an example bulk store700for granular material, in accordance with various embodiments. For purposes of example, and not of limitation, bulk store700is depicted as a grain bin which is used to bulk store grain (e.g., corn, wheat, soybeans, or other grain). Bulk store700includes an access door705through which device100may be inserted into and/or removed from bulk store700. Bulk store700also includes a top loading portal701through which bulk grain may be deposited, by an auger or other loading system (not depicted), and then fall into bulk store700to form a pile of granular material (e.g., grain710shown inFIG.7B).
Section lines depict the location and direction of Section A-A and Section B-B, which are illustrated in other figures. FIG.7Billustrates a side sectional view A-A of an example bulk store700for granular material which shows a device100moving about and/or operating in relation to a portion (portion720as shown inFIG.7C) of a surface711of piled granular material (e.g., grain710) in the bulk store700, in accordance with various embodiments. Because some of grain710has been removed from the bottom of bulk store700, a cone-shaped concavity has been created with a slope of approximately 20 degrees down from the walls to the center of bulk store700in the portion of piled granular material where device100is operating. The slope of 20 degrees is used for example purposes only. The maximum angle of the downward slope from the sides to the middle is dictated by the angle of repose, which differs for different granular materials and may differ for a particular granular material based on environmental physical characteristics (such as moisture) of the granular material. When a granular material is steeply sloped and near the angle of repose, it can be easily triggered to slide and cause entrapment of a person. When the slope of a granular material exceeds its angle of repose, it slides (like an avalanche). Additionally, when grain710becomes steeply sloped as illustrated during removal, it means that much of the removed grain is coming out from the center of the bin, rather than a mixture of grain from all areas of the bin. Leveling, or reduction of slope, of an inwardly sloped pile reduces the risk of a slide and distributes grain from the high sloped edges to prevent/reduce spoilage of those portions of the grain. Due to the friction of augers403against grain710and the agitation caused to grain710by augers403when device100traverses a portion of piled granular material (e.g., portion720of grain710), viscosity of the piled granular material is disrupted. The disruption of viscosity lowers the angle of repose and, because of the slope being caused to exceed the angle of repose, incites sediment gravity flow in the portion of piled granular material down the slope. Additionally, rotational movement of the augers also displaces grain710and can be used to auger the grain in a desired direction or expel it such that gravity carries it down slope. Either or both of these actions can be used to disperse grain710and/or to adjust (reduce) the slope of portion720. FIG.7Cillustrates a top sectional view B-B of an example bulk store700for granular material which shows a device100moving about and/or operating in relation to a portion720of a surface711of piled granular material710in the bulk store700, in accordance with various embodiments. FIG.7Dillustrates a top sectional view B-B of an example bulk store700for granular material which shows pattern730for moving a device100about and/or operating in relation to a portion720of a surface711of piled granular material710in the bulk store700, in accordance with various embodiments. In some embodiments, pattern730may be manually driven by a remotely located operator via remote controller501(for example). In some embodiments, pattern730may be autonomously driven by device100. In some embodiments, pattern730may be initiated due to a first measurement of the angle of slope of grain710in portion720satisfying a first condition such as being beyond an acceptable threshold angle (e.g., 10 degrees of slope).
Pattern730or other patterns of traversal of portion720may be repeatedly driven until a follow-on measurement of the angle of slope of grain710in portion720meets a second condition (e.g., falls below the threshold angle or falls below some other angle such as 7 degrees). In this manner a portion (e.g., portion720) or all of the grain in bulk store700can have its slope adjusted downward, closer to level. FIG.7Eillustrates a top sectional view B-B of an example bulk store700for granular material which shows pattern731for moving a device100about and/or operating in relation to a portion720of a surface711of piled granular material710in the bulk store700, in accordance with various embodiments. In some embodiments, pattern731may be manually driven by a remotely located operator via remote controller501(for example). In some embodiments, pattern731may be autonomously driven by device100. In some embodiments, pattern731may be initiated due to a first measurement of the angle of slope of grain710in portion720satisfying a first condition such as being beyond an acceptable threshold angle (e.g., 10 degrees of slope). Pattern731or other pattern(s) of traversal of portion720may be repeatedly driven until a follow-on measurement of the angle of slope of grain710in portion720meets a second condition (e.g., falls below the threshold angle or falls below some other angle such as 7 degrees). In this manner a portion (e.g., portion720) or all of the grain in bulk store700can have its slope adjusted downward, closer to level. FIG.7Fillustrates a top sectional view B-B of an example bulk store700for granular material which shows pattern732for moving a device100about and/or operating in relation to a portion720of a surface711of piled granular material710in the bulk store700, in accordance with various embodiments. In some embodiments, pattern732may be manually driven by a remotely located operator via remote controller501(for example). In some embodiments, pattern732may be autonomously driven by device100. In some embodiments, pattern732may be initiated due to a first measurement of the angle of slope of grain710in portion720satisfying a first condition such as being beyond an acceptable threshold angle (e.g., 10 degrees of slope). Pattern732or other pattern(s) of traversal of portion720may be repeatedly driven until a follow-on measurement of the angle of slope of grain710in portion720meets a second condition (e.g., falls below the threshold angle or falls below some other angle such as 7 degrees). In this manner a portion (e.g., portion720) or all of the grain in bulk store700can have its slope adjusted downward, closer to level. InFIG.7F, pattern732is confined to portion720. In such an embodiment, only this portion may be leveled by device100, or else device100may work its way around bulk store700portion by portion, leveling each portion completely or incrementally before moving to the next portion. FIGS.7D-7Fillustrate only three example patterns; many other patterns are possible and anticipated including, but not limited to: grid patterns, circular patterns, symmetric patterns, unsymmetrical patterns, spiral patterns, random/chaos motion (e.g., patternless), patterns/paths that are dynamically determined based on the slope and changes of the slope, and patterns which are cooperatively executed by two or more devices100working in communication with one another.
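By way of illustration only, the first-condition/second-condition logic described above may be summarized as in the following Python sketch, in which traversal of a portion is repeated until a follow-on slope measurement falls below a stop threshold; the measure_slope and traverse_pattern callables, the threshold values, and the pass limit are hypothetical placeholders for robot behaviors rather than actual control software.

from typing import Callable

def level_portion(measure_slope: Callable[[], float],
                  traverse_pattern: Callable[[], None],
                  start_threshold_deg: float = 10.0,
                  stop_threshold_deg: float = 7.0,
                  max_passes: int = 50) -> float:
    """Traverse until the slope satisfies the second condition; return the final slope."""
    slope = measure_slope()
    if slope <= start_threshold_deg:        # first condition not satisfied; nothing to do
        return slope
    for _ in range(max_passes):             # bound the number of passes as a safety stop
        traverse_pattern()                  # agitate the portion to incite gravity flow
        slope = measure_slope()             # follow-on measurement of the angle of slope
        if slope < stop_threshold_deg:      # second condition satisfied; cease traversal
            break
    return slope

if __name__ == "__main__":
    # Simulated example: each traversal pass reduces the measured slope by 3.5 degrees.
    state = {"slope": 20.0}
    final = level_portion(lambda: state["slope"],
                          lambda: state.update(slope=state["slope"] - 3.5))
    print(round(final, 1))   # ends below the stop threshold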
Any of the patterns executed by device100may be stored in host memory103for automated execution by processor102controlling the movements of device100to traverse the pattern. Similarly, patternless or dynamic movement may be executed by processor102in an automated fashion by controlling the movements of device100, such as to seek out portions with a slope which satisfies a first condition and traverse them until the slope satisfies the second condition. In some embodiments, patterns or traversal operations may similarly be utilized to break up and distribute grain710to assist it in drying out, to prevent a crust from forming, to inspect grain, to push grain towards a sweep auger or other uptake, and/or to diminish spoilage. In some embodiments, patterns or traversal operations may similarly be utilized to level peaks which form in grain or other piled granular material due to the method and/or location in which it is loaded into a bulk store. Such leveling better utilizes available storage space, reduces crusts or pipe formation, reduces hotspots, and/or more evenly distributes granular material of differing moisture contents. FIG.7Gillustrates a side sectional view A-A of an example bulk store700for granular material710which shows a device100moving about and/or operating in relation to a portion (e.g., portion720) of a surface711of piled granular material710in the bulk store700, in accordance with various embodiments.FIG.7Gis similar toFIG.7Bexcept that the slope has been downwardly adjusted from 20 degrees to approximately 13 degrees (as measured by device100) by traversal of the portion by device100. FIG.7Hillustrates a side sectional view A-A of an example bulk store700for granular material710which shows a device100moving about and/or operating in relation to a portion (e.g., portion720) of a surface711of piled granular material710in the bulk store700, in accordance with various embodiments.FIG.7His similar toFIG.7Gexcept that the slope has been further downwardly adjusted from 13 degrees to approximately 5 degrees (as measured by device100) by traversal of the portion by device100. Example Methods of Bulk Store Slope Adjustment Procedures of the methods illustrated by flow diagram800ofFIGS.8A-8Ewill be described with reference to elements and/or components of one or more ofFIGS.1-7H. It is appreciated that in some embodiments, the procedures may be performed in a different order than described in a flow diagram, that some of the described procedures may not be performed, and/or that one or more additional procedures to those described may be performed. Flow diagram800includes some procedures that, in various embodiments, are carried out by one or more processors (e.g., host processor102or any processor of device100or a computer or system to which device100is communicatively coupled) under the control of computer-readable and computer-executable instructions that are stored on non-transitory computer-readable storage media (e.g., host memory103, other internal memory of device100, or memory of a computer or system to which device100is communicatively coupled). It is further appreciated that one or more procedures described in flow diagram800may be implemented in hardware, or a combination of hardware with firmware and/or software. For purposes of example only, the device100ofFIGS.1-7His a robotic device which utilizes augers (403) to move and maneuver with respect to piled granular material, such as, but not limited to grain. 
Robot100will be described as operating on or in relation to piled granular material in a bulk store, such as, but not limited to grain in a grain bin. In some embodiments, the robot100is free of mechanical coupling with a structure (e.g., the bulk store) in which the piled granular material is contained. For example, in some embodiments, there is no tether or safety harness coupling the robot100to the grain storage bin and it operates autonomously or under wireless remote control. In some embodiments, robot100performs the method of flow diagram800completely autonomously. In some embodiments, robot100performs the method of flow diagram800semi-autonomously such as by measuring a slope of grain, sending the slope to an external computer system which then determines a pattern for robot100to autonomously execute when traversing the piled grain. In some embodiments, robot100performs the method of flow diagram800semi-autonomously such as by receiving a remotely measured slope of grain, then autonomously determining a pattern for robot100to autonomously execute when traversing the piled grain. FIGS.8A-8Eillustrate a flow diagram800of an example method of bulk store slope adjustment, in accordance with various embodiments. With reference toFIG.8A, at procedure810of flow diagram800, in various embodiments, a robot100which includes a processor102, a memory103, and an auger-based drive system (e.g., augers403), obtains a first measurement of an angle of slope of a portion of piled granular material in a bulk store. With reference toFIGS.7A,7B, and7C, this can comprise a measure of the angle of slope of portion720of grain710in bin700. The angle can be measured and obtained autonomously by robot100or can be measured by a device external to robot100and then obtained by being communicated to or accessed by robot100. In an embodiment where the angle of slope is measured by robot100, motion sensor(s)220may be used to measure the angle of robot100on a slope of portion720to approximate the angle of the slope. In some embodiments, procedure810may be skipped and an operator may simply direct robot100to begin traversal of a portion (e.g., portion720) of piled granular material. With continued reference toFIG.8A, at procedure820of flow diagram800, in various embodiments, in response to the first measurement satisfying a first condition, the robot100traverses the portion of piled granular material to incite sediment gravity flow in the portion of piled granular material by disruption of viscosity of the portion of piled granular material through agitation of the portion of piled granular material by auger rotation of the auger-based drive system. The traversal may be controlled by host processor102via control of the direction of rotation and/or the speed of rotation of augers403of robot100. Robot100may traverse the portion (e.g., portion720) of piled granular material (e.g., piled grain710) in a predetermined pattern, which may be a predetermined pattern of movement stored in host memory103of robot100. Robot100may traverse the portion (e.g., portion720) of piled granular material (e.g., piled grain710) in a patternless or random/chaos manner or by following dictates other than a pattern such as by dynamically seeking out areas of slope above a certain measure. In some embodiments, a pattern may be changed or altered based on information sensed by robot100.
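By way of illustration only, a predetermined pattern of movement of the kind that may be stored in host memory103could be represented as a simple list of waypoints; the following Python sketch generates a back-and-forth ("serpentine") coverage pattern over a rectangular portion, with the coordinate frame, dimensions, and lane spacing being assumptions introduced solely for this example.

from typing import List, Tuple

def serpentine_pattern(width_m: float, depth_m: float,
                       lane_spacing_m: float) -> List[Tuple[float, float]]:
    """Return (x, y) waypoints that sweep alternating lanes across a width-by-depth portion."""
    waypoints: List[Tuple[float, float]] = []
    y = 0.0
    going_right = True
    while y <= depth_m:
        # Each lane runs the full width of the portion, alternating direction.
        x_start, x_end = (0.0, width_m) if going_right else (width_m, 0.0)
        waypoints.append((x_start, y))
        waypoints.append((x_end, y))
        y += lane_spacing_m
        going_right = not going_right
    return waypoints

if __name__ == "__main__":
    for waypoint in serpentine_pattern(width_m=6.0, depth_m=4.0, lane_spacing_m=2.0):
        print(waypoint)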
With continued reference toFIG.8A, at procedure830of flow diagram800, in various embodiments, robot100obtains a second measurement of the angle of slope of the portion of piled granular material. This second measurement is obtained after the robot has traversed the portion (e.g., portion720) following a pattern, for a predetermined period of time, or based on other criteria for re-measurement of the slope. The second angle measurement can be measured and obtained autonomously by robot100or can be measured by a device external to robot100and then obtained by being communicated to or accessed by robot100. With continued reference toFIG.8A, at procedure840of flow diagram800, in various embodiments, in response to the second measurement satisfying a second condition, robot100ceases traversal of the portion of piled granular material. In some embodiments, the first condition is related to a first angle and the second condition is related to a second angle. In some embodiments, where the first angle is the same as the second angle, the first condition may be met when the first measurement exceeds the angle, and the second condition may be met when the second measurement falls below the angle. For example, the angle may be 10 degrees, and when the first measurement is 20 degrees, traversal will continue until the angle is adjusted to below 10 degrees. In some embodiments, where the first angle and the second angle are different, the first angle is larger than the second angle. For example, the first angle may be 10 degrees while the second angle is 5 degrees. In such an embodiment, when the first measurement is 20 degrees, traversal will continue until the angle meets the second condition (e.g., drops below 5 degrees). With reference toFIG.8B, at procedure850of flow diagram800, in various embodiments, in response to the second measurement failing to satisfy the second condition, the robot100continues traversal of the portion of piled granular material. For example, if the second condition specifies that the measurement of slope needs to be reduced to below 5 degrees, the robot would continue traversal of the portion of piled granular material in response to the second measurement being 13 degrees. With reference toFIG.8C, at procedure860of flow diagram800, in various embodiments, during traversal of the portion (e.g.,720) of piled granular material by robot100, a sensor120of robot100acts under instruction of host processor102to capture a measurement of a characteristic of the portion of piled granular material. Some example characteristics include, but are not limited to, capturing a measurement of: temperature, humidity, moisture, gas composition, electrostatic nature, and/or electrochemical nature. A measured characteristic may also comprise an optical and/or infrared image. The captured measurement of a characteristic can be stored within memory103or transmitted from robot100. In some embodiments, the captured measurement of a characteristic is paired with a location of robot100at the time of capture of the measurement. Such paired data can be used to create a characteristic map of the piled granular material which is traversed by robot100. In some embodiments, the captured measurement(s) of characteristic(s) may be transmitted to a base station (506,605) communicatively coupled with robot100.
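By way of illustration only, the pairing of captured measurements with capture locations described above might be organized as in the following Python sketch, which bins location-tagged readings into grid cells to form a simple characteristic map (e.g., a heat map when the readings are temperatures); the class name, cell size, and method names are assumptions introduced solely for this example.

from collections import defaultdict
from typing import Dict, List, Tuple

Cell = Tuple[int, int]

class CharacteristicMap:
    def __init__(self, cell_size_m: float = 1.0):
        self.cell_size_m = cell_size_m
        self.readings: Dict[Cell, List[float]] = defaultdict(list)

    def add(self, x_m: float, y_m: float, value: float) -> None:
        """Record one measurement taken at location (x_m, y_m)."""
        cell = (int(x_m // self.cell_size_m), int(y_m // self.cell_size_m))
        self.readings[cell].append(value)

    def cell_maxima(self) -> Dict[Cell, float]:
        """Worst-case (maximum) value per cell, e.g., the hottest temperature seen."""
        return {cell: max(values) for cell, values in self.readings.items()}

if __name__ == "__main__":
    heat_map = CharacteristicMap(cell_size_m=2.0)
    heat_map.add(1.0, 1.0, 24.5)   # degrees C measured at (1, 1)
    heat_map.add(1.5, 0.5, 31.0)   # same cell, hotter reading
    heat_map.add(5.0, 3.0, 22.0)
    print(heat_map.cell_maxima())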
The base station (506,605) is located remotely from the robot and may be configured to communicate with robot100over the Internet, via a wide-area network, via a peer-to-peer communication, or by other means. Via such communications, the base station (506,605) may receive data (including motion sensor data) collected by robot100during the traversal of the portion of piled granular material. Additionally, or alternatively, via such communications, the base station (506,605) may relay instructions to robot100. In some embodiments, the captured measurement(s) of characteristic(s) may be transmitted to a cloud-based602storage603and/or processing604which is/are communicatively coupled with robot100. The cloud-based infrastructure602may be utilized to process data, store data, make data available to other devices (e.g., computer605), and/or relay information or instructions from other devices (e.g., computer605) to robot100. With reference toFIG.8D, at procedure870of flow diagram800, in various embodiments, a temperature sensor233, infrared sensor236, or infrared camera108of robot100is used to capture a temperature measurement of the portion of piled granular material during the traversal of the portion of piled granular material. In some embodiments, the captured measurement of a characteristic is paired with a location of robot100at the time of capture of the temperature measurement. Such paired data can be used to create a heat map of the piled granular material which is traversed by robot100. With reference toFIG.8E, at procedure880of flow diagram800, in various embodiments, robot100collects a sample from the portion of piled granular material during the traversal of the portion of piled granular material. For example, with reference toFIG.4A-2, processor102or a remotely located operator may direct a sample collection device, such as gatherer payload342, to open to collect a sample of grain at a particular location and to close after a sample is collected or a predetermined time period has elapsed. Delivery of Payloads in a Bulk Store A device100, such as a robot, may precisely deliver and retrieve payloads within a bulk store (e.g., bulk store700) for granular material. The payload may be any desired payload which can be carried by the device100, numerous examples of which have been discussed previously, and may include a sensor (e.g., a temperature sensor, a humidity sensor, an elevation sensor, or some combination of sensors) or a probe which includes one or more of these sensors and is configured to record and/or wirelessly communicate information measured by the sensors. In various embodiments, a probe may collect information about the granular material (grain) which proximally surrounds it (e.g., the temperature local to the probe). In various embodiments, for example, device100can operate via remote controlled instruction, autonomously, or some combination thereof. As discussed above, device100is robotic and may be referred to as a “robot” or as a “robotic device,” and includes an auger-based drive system which facilitates the movement and/or operation of the device in relation to a portion of piled granular material in a bulk store700, such as a grain bin. The robotic device can be equipped with a payload delivery system allowing the precise placing of a payload, such as a probe, at known location coordinates within the bulk store.
In some embodiments, this location is marked and stored in the payload during delivery and/or in the robotic device100upon delivery of the payload. For example, the robot maneuvers on the granular material with its auger-driven propulsion and using an adaptable tool or a probe delivery module which may be carried in payload bay440(e.g., probe delivery payload344) or elsewhere on device100, delivers the probe, and marks the probe's location upon delivery/deposition onto the granular material. An adaptable tool can deliver a variety of probes, while a probe delivery module may be configured for delivering and/or retrieving a specific type of probe. One embodiment of a probe delivery payload344is illustrated inFIGS.9A-9E; it is appreciated that any suitable probe delivery payload may be similarly utilized and that the embodiment ofFIGS.9A-9Eis provided by way of example and not of limitation. FIG.9Aillustrates a top view of an example probe delivery payload344which may be coupled to and controlled by a device100which moves about and/or operates in relation to a pile of granular material, in accordance with various embodiments. FIG.9Billustrates a front view of an example probe delivery payload344which may be coupled to and controlled by a device100, in accordance with various embodiments. The rear view is substantially the same. FIG.9Cillustrates a bottom view of an example probe delivery payload344which may be coupled to and controlled by a device100, in accordance with various embodiments. A plurality of doors901(901-1,901-2,901-3,901-4,901-5,901-6,901-7,901-8,901-9) are depicted, but a greater or lesser number may be used in various embodiments. Each of the doors901may be independently opened by a device100, or a processor thereof, such as by actuating a solenoid which holds a particular door in a closed position. FIG.9Dillustrates a right side view of an example probe delivery payload344which may be coupled to and controlled by a device100, in accordance with various embodiments. The left side view is a mirror image thereof. FIG.9Eillustrates a right side view of an example probe delivery payload344which may be coupled to and controlled by a device100, in accordance with various embodiments. InFIG.9E, door901-1has been opened by a device100, freeing a payload910to be dropped via gravity. Payload910may be a probe which is left behind after it lands on a surface upon which the device100is operating. FIG.10illustrates a right elevational view of the exterior of a device100B which moves about and/or operates in relation to a pile of granular material710, in accordance with various embodiments. Device100B is similar to device100illustrated inFIG.4C-2, except that probe delivery payload344has been coupled to its rear and communicatively coupled to a host processor102which exerts control over which doors901to open and when to open them in order to precisely deliver a payload910. Several methods of payload delivery are described in conjunction with description ofFIGS.11,12, and13. It should be appreciated that although these methods are described in isolation for purposes of clarity, they may be used in various combinations with one another. For example, while probes are being delivered according to a predetermined pattern, a device100(e.g., device100B) may deliver an individual probe in a place which is not specified by the pattern in response to receiving remote controlled instructions to do so and/or in response to sensing specified criteria which satisfy a requirement for delivery of a probe.
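By way of illustration only, the independently openable doors901of probe delivery payload344could be managed with logic along the lines of the following Python sketch; the actuate_solenoid hook and the other identifiers are hypothetical stand-ins rather than an actual hardware interface.

from typing import Callable, Dict

class ProbeDeliveryPayload:
    def __init__(self, door_count: int, actuate_solenoid: Callable[[int], None]):
        self.actuate_solenoid = actuate_solenoid
        # Track which doors still hold an undelivered probe.
        self.loaded: Dict[int, bool] = {door: True for door in range(1, door_count + 1)}

    def release_next(self) -> int:
        """Open the lowest-numbered door that still holds a probe; return its number."""
        for door, has_probe in sorted(self.loaded.items()):
            if has_probe:
                self.actuate_solenoid(door)   # energize the solenoid to free the door
                self.loaded[door] = False     # the probe drops away under gravity
                return door
        raise RuntimeError("no probes remaining in payload")

if __name__ == "__main__":
    payload = ProbeDeliveryPayload(
        door_count=9,
        actuate_solenoid=lambda d: print(f"opening door 901-{d}"))
    payload.release_next()   # opening door 901-1
    payload.release_next()   # opening door 901-2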
A pattern for probe delivery may be the same pattern (or a portion thereof) used to level a piled granular material in a bin or other store. For example, during the leveling, probes may be dispensed at designated locations which may be manually selected, predetermined/preprogrammed, and/or in response to meeting of sensed criteria (e.g., one or some combination of location, temperature measured, air flow measured, moisture of granular material measured, etc.). That is, while leveling piled granular material, a device100B may encounter locations or criteria which dictate triggering of payload delivery. In this manner, payload delivery may, in some embodiments, occur coincident with other activities of device100B. FIG.11illustrates robot delivery of a payload910, which may be a probe, in a bulk store in a predetermined three-dimensional pattern as granular material such as grain is added to the bulk store700, according to various embodiments. Bulk store700is shown as a three-dimensional side section view, similar to section A-A ofFIG.7B. Dashed discs1101,1102, and1103(shown inFIGS.11,12, and13) represent portions of different levels within a pile of piled granular material710. InFIG.11, device100B is illustrated delivering a plurality of probes910(e.g.,910-1through910-15) over a period of time as grain has been loaded through top loading portal701of bulk store700to form a pile710of granular material (e.g., a pile of grain). For example, at level1101, at the beginning of the loading of grain, device100B delivered probes910-1and910-2according to a specified and predetermined, spaced out pattern. At level1102, after more grain has been loaded atop level1101of piled granular material710, device100B delivered probes910-3through910-7according to a specified and predetermined, spaced out pattern (which may be the same or different than the pattern employed on level1101). At level1103, after more grain has been loaded atop level1102of the piled granular material710, device100B delivered probes910-8through910-15according to a specified and predetermined, spaced out pattern (which may be the same or different than the pattern employed on level1101and/or on level1102). In the illustrated embodiment, probe910-15has just been delivered relative to a preprogrammed position on piled granular material710. In some embodiments, a method of probe delivery in a predetermined pattern within a bulk store, such as a grain bin, may include some of the following procedures. A probe910, or set of probes910, is loaded into the probe delivery payload344of device100B. The device100B is given instructions on where to deliver the probes via a pattern selection in its programmable memory103. The device100B is placed in the bulk store700facility (or on a pile of granular material710). Granular material (e.g., grain) begins to be loaded into the bulk store700and/or onto the pile710, in some embodiments. The device100B performs a series of maneuvers on the surface of the granular material to position itself with respect to the pattern which it is executing by traversing the piled granular material710(which may be in the process of loading such as through a top loading portal701). A probe910is placed by the device100B (e.g., by controlling dispensation of the probe910from the probe delivery payload344) in the precise location when the device100B arrives through its maneuvering at a predetermined location in the programmed pattern.
In some embodiments, the location is marked by device100B with the probe identification (e.g., a serial number or other number assigned to the dispensed probe910) and position coordinates at the time of the delivery. Inside of a bulk store700, the position may be realized by triangulation to beacons or other suitable means such as overhead video tracking. As part of the marking, the probe identification and/or position may be stored in a memory of device100B and/or wirelessly transmitted by device100B. In the same manner, according to the preprogrammed pattern, one or more additional probes910may be placed and, in some embodiments, may have their probe identification and placed position coordinates marked (i.e., recorded by device100B and/or wirelessly transmitted by device100B). FIG.12illustrates robot delivery of a payload910, which may be a probe, by a device100B in a bulk store700when triggered by detection of specified criteria, according to various embodiments. InFIG.12, device100B is illustrated delivering a probe910-1to a specific preprogrammed location1210which may be a two-dimensional location or a three-dimensional location (where the third dimension is elevation). The location may be specified as an exact set of coordinates or as a small geo-fence within which to deliver the probe910-1. A plurality of probes may be delivered in this manner to a plurality of preprogrammed locations. The specified criteria discussed above may be arrival of device100B at the predetermined location1210; however, additional and/or different specified criteria may determine when/where a probe910-1is delivered. For example, device100B may deposit a temperature sensing probe910upon device100B sensing that a temperature of grain in a locality of granular material it is traversing meets specified criteria (e.g., exceeds a threshold temperature). In some embodiments, a method of probe delivery within a bulk store700, such as a grain bin, in response to detection of specified criteria may include some of the following procedures. The probe910, or set of probes, is loaded into the probe delivery payload344of device100B. Device100B is placed in the bulk store facility700(or on a pile of granular material710). Granular material (e.g., grain) begins to be loaded into the bulk store700and/or onto the pile710, in some embodiments. The device100B performs a series of maneuvers on the surface of the piled granular material710to position itself, where the maneuvers may be automated, based on stored instructions (e.g., a pattern), based on human remote control, or some combination thereof. The device100B performs a series of readings with on-board sensors. The probe910is placed in the specific location when the sensor readings detect a predetermined condition (i.e., the specified criteria, such as grain temperature exceeding a preestablished threshold) and the device100B triggers the delivery instructions to effect dispensation of a probe from the probe delivery payload344. In some embodiments, the location is marked by device100B with the probe identification (e.g., a serial number or other number assigned to the dispensed probe910) and position coordinates at the time of the delivery. Inside of a bulk store700, the position may be realized by triangulation to beacons or other suitable means such as overhead video tracking. As part of the marking, the probe identification and/or position may be stored in a memory of device100B and/or wirelessly transmitted by device100B.
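By way of illustration only, the marking of a delivered probe's identification and position coordinates might be recorded as in the following Python sketch, which stores each record in the robot's memory and can serialize the records for wireless transmission; the field names and data structure are assumptions introduced solely for this example.

import json
import time
from dataclasses import asdict, dataclass
from typing import List

@dataclass
class ProbeDeliveryRecord:
    probe_id: str        # e.g., serial number of the dispensed probe
    x_m: float           # position within the bulk store (e.g., from beacon triangulation)
    y_m: float
    elevation_m: float   # level within the pile at delivery time
    timestamp_s: float

class DeliveryLog:
    def __init__(self) -> None:
        self.records: List[ProbeDeliveryRecord] = []

    def mark(self, probe_id: str, x_m: float, y_m: float, elevation_m: float) -> ProbeDeliveryRecord:
        record = ProbeDeliveryRecord(probe_id, x_m, y_m, elevation_m, time.time())
        self.records.append(record)          # store locally in the robot's memory
        return record

    def to_json(self) -> str:
        """Serialize all records, e.g., for wireless transmission to a base station."""
        return json.dumps([asdict(record) for record in self.records])

if __name__ == "__main__":
    log = DeliveryLog()
    log.mark(probe_id="910-15", x_m=3.2, y_m=7.8, elevation_m=4.5)
    print(log.to_json())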
In the same manner, one or more additional probes910may be placed and, in some embodiments, may have their probe identification and placed position coordinates marked (i.e., recorded by device100B and/or wirelessly transmitted by device100B). FIG.13illustrates robot delivery of a payload910, which may be a probe, in a bulk store700when triggered by human engagement, according to various embodiments. For example, a human1310may utilize a remote controller501to send signals1320to device100B and receive signals from device100B. InFIG.13, device100B is illustrated delivering a probe910-1upon receiving instructions from human1310which are sent via remote controller501or by other suitable means. In some embodiments, a signal1330may be wirelessly sent to remote controller501, or elsewhere, with the identification and marked location of a dispensed probe910-1. In some embodiments, a method of probe delivery within a bulk store700, such as a grain bin, in response to direction by human remote control may include some of the following procedures. The probe910, or set of probes, is loaded into the probe delivery payload344of device100B. The device100B is placed in the bulk store facility700(or on a pile of granular material710). Granular material (e.g., grain) begins to be loaded into the bulk store700and/or onto the pile710, in some embodiments. The device100B performs a series of maneuvers on the surface of the granular material to position itself, where the maneuvers may be automated, based on stored instructions (e.g., a pattern), based on human remote control, or some combination thereof. The device100B is maneuvered by human remote control to a location where it is desired to place a probe910. The probe910is placed in the specific location when the human remotely triggers device100B to provide delivery instructions to effect dispensation of a probe910from the probe delivery payload344. In some embodiments, the location is marked by device100B with the probe identification (e.g., a serial number or other number assigned to the dispensed probe910) and position coordinates at the time of the delivery. Inside of a bulk store700, the position may be realized by triangulation to beacons or other suitable means such as overhead video tracking. As part of the marking, the probe identification and/or position may be stored in a memory of device100B and/or wirelessly transmitted by device100B. In the same manner, human remote instruction may be used to control device100B to maneuver and place one or more additional probes, which may have their probe identification and placed position coordinates marked (i.e., recorded by device100B and/or wirelessly transmitted by device100B). Piled Grain Surface Management Procedures of the methods illustrated by flow diagram1400ofFIGS.14A-14Dwill be described with reference to elements and/or components of one or more ofFIGS.1-13. It is appreciated that in some embodiments, the procedures may be performed in a different order than described in a flow diagram, that some of the described procedures may not be performed, and/or that one or more additional procedures to those described may be performed.
Flow diagram1400includes some procedures that, in various embodiments, are carried out by one or more processors (e.g., host processor102or any processor of device100or a computer or system to which device100is communicatively coupled) under the control of computer-readable and computer-executable instructions that are stored on non-transitory computer-readable storage media (e.g., host memory103, other internal memory of device100, or memory of a computer or system to which device100is communicatively coupled). It is further appreciated that one or more procedures described in flow diagram1400may be implemented in hardware, or a combination of hardware with firmware and/or software. For purposes of example only, the devices100and100B (generically referred to as “device100” and/or “robot100”) are robotic devices which utilize augers (403) to move and maneuver with respect to piled granular material, such as, but not limited to piled grain. Robot100, which operates as a piled grain surface management robot, will be described as operating on or in relation to piled grain in a bulk store, such as, but not limited to grain in a grain bin. In some embodiments, the robot100is free of mechanical coupling with a structure (e.g., the bulk store) in which the piled grain is contained. For example, in some embodiments, there is no tether or safety harness coupling the robot100to the grain storage bin and it operates autonomously or under wireless remote control. In some embodiments, robot100performs the method of flow diagram1400completely autonomously. In some embodiments, robot100performs the method of flow diagram1400semi-autonomously such as by measuring a slope of grain, sending the slope to an external computer system which then determines a pattern for robot100to autonomously execute when traversing the piled grain. In some embodiments, robot100performs the method of flow diagram1400semi-autonomously such as by receiving a remotely measured slope of grain, then autonomously determining a pattern for robot100to autonomously execute when traversing the piled grain. FIGS.14A-14Dillustrate a flow diagram1400of an example method of surface management of piled grain, in accordance with various embodiments. With reference toFIG.14A, at procedure1410of flow diagram1400, in various embodiments, a robot100which includes a processor102, a memory103, and an auger-based drive system (e.g., augers403), receives instructions to traverse a surface of piled grain in a bulk store. In some embodiments, the instructions may be received wirelessly from a remotely located computer system (506,605,604, etc.) or wirelessly from a remote controller501operated by a human (i.e., a human may drive the robot100remotely). In some embodiments, the instructions may be preprogrammed into robot100. In some embodiments, the instructions are for the robot100to follow a predetermined pattern of movement to traverse the surface of the piled grain. With continued reference toFIG.14A, at procedure1420of flow diagram1400, in various embodiments, a processor (e.g., processor102) of robot100controls movement of robot100according to the instructions. Via commands to motor controllers105and/or drive motors106of an auger-based drive system, the robot100is controlled to traverse a surface of piled grain710in a bulk store700. As a result of the traversal, a crust layer of the surface is broken up by auger rotation of the auger-based drive system during the traversal.
That is, the augers churn the surface of the piled grain710to a depth of one to several inches, thus breaking up surface crust and crust which may form a grain bridge over a void in the piled grain710. Breaking the crust in this manner allows grain below the crust to dry more evenly and prevents spoilage that can result from the crust on the surface. Additionally, breaking up crusts which are part of a grain bridge assists in the flow of the grain when the grain is removed from the bulk store and improves human safety, should a human need to enter and walk upon the surface of the piled grain710. The traversal may be according to a pattern, many of which have been depicted and described herein. With continued reference toFIG.14A, at procedure1430of flow diagram1400, in various embodiments, the processor of robot100directs, according to the instructions, traversal by the robot of a sloped portion of the piled grain to incite sediment gravity flow in the sloped portion of piled grain by disruption of viscosity of the sloped portion of piled grain through agitation of the sloped portion of the piled grain by the auger rotation of the auger-based drive system, wherein the sediment gravity flow reduces a slope of the sloped portion. As described herein, the sediment gravity flow is, effectively, a purposely induced landslide. The sloped portion may be sought out by the robot100, in some embodiments. In some embodiments, the traversal of one or more sloped portions is repeated to bring the slope of the sloped portion more toward level, which may be realized by bringing the slope below a threshold slope such as between +/− 5 degrees, between +/− 4 degrees, +/− 2 degrees, or +/− 1 degree. In some embodiments, the traversal of one or more sloped portions is repeated to bring the slope of the sloped portion more toward level by reducing the slope by a predetermined amount such as 3 degrees, 5 degrees, 10 degrees, etc. With reference toFIG.14B, at procedure1440of flow diagram1400, in various embodiments, during traversal of a portion (e.g., portion720) of piled grain by robot100, a sensor120of robot100acts under instruction of host processor102to capture a measurement of a characteristic of the portion of piled granular material. Some example characteristics include, but are not limited to, capturing a measurement of: temperature, humidity, moisture, gas composition, electrostatic nature, and/or electrochemical nature. A measured characteristic may also comprise an optical and/or infrared image. The captured measurement of a characteristic can be stored within memory103or transmitted from robot100. In some embodiments, the captured measurement of a characteristic is paired with a location of robot100at the time of capture of the measurement. Such paired data can be used to create a characteristic map of the piled grain which is traversed by robot100. In some embodiments, the captured measurement(s) of characteristic(s) may be transmitted to a base station (506,605) that is/are communicatively coupled with robot100. The base station (506,605) is located remotely from the robot and may be configured to communicate with the robot100over the Internet, via a wide-area network, via a peer-to-peer communication, or by other means. Via such communications, the base station (506,605) may receive data (including motion sensor data) collected by robot100during the traversal of the portion of piled grain.
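To make the flow of procedures 1430 and 1440 described above concrete, here is a short, hedged sketch of a traversal loop that repeatedly agitates a sloped portion until the slope falls below a threshold and pairs each captured measurement with the robot's location. The threshold value and all method names (measure_local_slope, traverse_slope, get_position, read_temperature) are assumptions for illustration, not the disclosed implementation.

```python
# Hedged sketch of procedures 1430/1440; the robot interface used here is assumed.
def level_and_map_sloped_portion(robot, threshold_deg=5.0, max_passes=20):
    grain_map = []                                # (location, measurement) pairs
    for _ in range(max_passes):
        slope_deg = robot.measure_local_slope()
        if abs(slope_deg) <= threshold_deg:       # e.g., within +/- 5 degrees of level
            break
        robot.traverse_slope()                    # auger agitation incites sediment gravity flow
        x, y = robot.get_position()
        grain_map.append({"x": x, "y": y,
                          "temperature_c": robot.read_temperature(),
                          "slope_deg": slope_deg})
    return grain_map                              # can be stored in memory 103 or transmitted
```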
Additionally, or alternatively, via such communications, the base station (506,605) may relay instructions to robot100. In some embodiments, the captured measurement(s) of characteristic(s) may be transmitted to a cloud-based602storage603and/or processing604which is/are communicatively coupled with robot100. The cloud-based infrastructure602may be utilized to process data, store data, make data available to other devices (e.g., computer605), and/or relay information or instructions from other devices (e.g., computer605) to robot100. With reference toFIG.14C, at procedure1450of flow diagram1400, in various embodiments, a temperature sensor233, infrared sensor236, or infrared camera108of robot100is used to capture a temperature measurement of a portion (e.g., portion720) of piled grain during the traversal of the portion of piled grain. In some embodiments, the captured temperature measurement is paired with a location of robot100at the time of capture of the temperature measurement. Such paired data can be used to create a heat map of the piled grain which is traversed by robot100. Additionally, temperature data can provide an operator of the bulk store with information about the conditions of storage and quality of grain, and/or can identify areas for additional traversal to prevent crust formation and ensure air circulation. With reference toFIG.14D, at procedure1460of flow diagram1400, in various embodiments, a probe delivery payload344delivers a probe910onto a surface of the piled grain710. As described herein, the probe may have sensors which measure and report conditions of the grain. The probe may be delivered during load-in of grain, and thus become buried in grain. This may facilitate, over time, positioning of probes which provide measurements at different levels within a column of piled grain710. Such delivery of probes may be based on preprogrammed positions in a pattern, coordinate locations, human direction, or automated response of robot100B upon detecting a particular characteristic (e.g., grain temperature above a preset threshold). Conclusion The examples set forth herein were presented in order to best explain, to describe particular applications, and to thereby enable those skilled in the art to make and use embodiments of the described examples. However, those skilled in the art will recognize that the foregoing description and examples have been presented for the purposes of illustration and example only. The description as set forth is not intended to be exhaustive or to limit the embodiments to the precise form disclosed. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. Reference throughout this document to “one embodiment,” “certain embodiments,” “an embodiment,” “various embodiments,” “some embodiments,” or similar term means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of such phrases in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics of any embodiment may be combined in any suitable manner with one or more other features, structures, or characteristics of one or more other embodiments without limitation. | 82,438 |
11858146 | DESCRIPTION OF EXEMPLARY EMBODIMENTS As below, a working method and a robot system according to the present disclosure will be explained in detail based on preferred embodiments shown in the accompanying drawings. First Embodiment FIG.1is a perspective view showing an overall configuration of a robot system according to a first embodiment of the present disclosure.FIG.2is a perspective view showing a first hand and a second hand.FIGS.3and4are flowcharts respectively showing working processes. A robot system1shown inFIG.1performs work of e.g. feeding, removal, transport, assembly, etc. of precision apparatuses and components forming the apparatuses. The robot system1has a robot2, a hand group5including a plurality of hands, a control apparatus6that controls driving of the robot2, and a hand temporarily fixing device7. Further, in the robot system1, one hand selected from the plurality of hands of the hand group5, a first hand51and a second hand52in the embodiment, according to a weight of an object Q as a working object such as the precision apparatus or the component is attached to the robot2and work is performed. As shown inFIG.1, the robot2is a six-arm robot having six drive axes. The robot2has a base21fixed to a floor or a ceiling and an arm22pivotably coupled to the base21. Further, the arm22has a first arm221pivotably coupled to the base21, a second arm222pivotably coupled to the first arm221, a third arm223pivotably coupled to the second arm222, a fourth arm224pivotably coupled to the third arm223, a fifth arm225pivotably coupled to the fourth arm224, a sixth arm226pivotably coupled to the fifth arm225, and a hand coupling portion227provided at a distal end side of the sixth arm226. A hand according to work to be executed by the robot2is selected from the plurality of hands of the hand group5and attached to the hand coupling portion227. Note that the hand coupling portion227includes e.g. a master plate of an automatic tool changer. Further, the robot2has a force detection sensor28placed between the sixth arm226and the hand coupling portion227. The force detection sensor28detects a force applied to the hand coupled to the hand coupling portion227. The configuration of the force detection sensor28is not particularly limited, but may be e.g. a configuration having a pressure receiving member of crystal quartz and detecting the applied force based on magnitude of electric charge generated when the pressure receiving member receives the force. The placement of the force detection sensor28is not particularly limited as long as the sensor may detect the force applied to the hand. Or, the force detection sensor28may be omitted. 
Furthermore, the robot2has a first arm pivot mechanism231placed in a joint portion between the base21and the first arm221and pivoting the first arm221relative to the base21, a second arm pivot mechanism232placed in a joint portion between the first arm221and the second arm222and pivoting the second arm222relative to the first arm221, a third arm pivot mechanism233placed in a joint portion between the second arm222and the third arm223and pivoting the third arm223relative to the second arm222, a fourth arm pivot mechanism234placed in a joint portion between the third arm223and the fourth arm224and pivoting the fourth arm224relative to the third arm223, a fifth arm pivot mechanism235placed in a joint portion between the fourth arm224and the fifth arm225and pivoting the fifth arm225relative to the fourth arm224, and a sixth arm pivot mechanism236placed in a joint portion between the fifth arm225and the sixth arm226and pivoting the sixth arm226relative to the fifth arm225. These first to sixth arm pivot mechanisms231to236include e.g. motors as drive sources, reducers, controllers controlling driving of the motors, encoders detecting amounts of displacement (pivot angles) of the arms, etc. The hand group5has the plurality of hands. As shown inFIG.1, in the embodiment, the hand group5has the first hand51and the second hand52as the plurality of hands. The first hand51and the second hand52have the same configuration except that an assist device9is provided for the first hand51, but the assist device9is not provided for the second hand52. As shown inFIG.2, each of the first hand51and the second hand52has a base portion531, a pair of claw portions532,533movably coupled to the base portion531, and a claw portion opening and closing mechanism535that moves the pair of claw portions532,533closer to and away from each other. Further, a robot coupling portion534to be coupled to the hand coupling portion227of the robot2is provided in the base portion531. The robot coupling portion534includes e.g. a tool plate of an automatic tool changer. The claw portion opening and closing mechanism535includes e.g. a motor as a drive source, a reducer, a controller controlling driving of the motor, an encoder detecting amount of displacement (amount of movement) of the claw portion532or533, etc. A coupling portion91of the assist device9is coupled only to the first hand51. In the first hand51and the second hand52, the pair of claw portions532,533are moved closer to each other to nip the object Q, thereby, may catch (grip) the object Q, and the pair of claw portions532,533are moved away from each other, thereby, may release the gripped object Q. Note that the configurations of the first hand51and the second hand52are respectively not particularly limited, but may be any configurations suitable for work. For example, the first hand51and the second hand52may respectively have configurations that grip the object Q by another method such as vacuum suction or magnetic suction than nipping. Or, the configurations of the first hand51and the second hand52may be different from each other. The hand group5may have at least one another hand in addition to the above described first hand51and second hand52. In this case, as a third hand, a hand that grips the object Q in the same manner as the first hand51and the second hand52may be used, or a hand having various tools and performing various other kinds of work including screwing, cutting, deburring, finishing, welding, inspection, and imaging than gripping may be used. 
As shown inFIG.1, when the first hand51and the second hand52are not coupled to the robot2, the hands are temporarily fixed to the hand temporarily fixing device7. As described above, the unused hands are temporarily fixed to the hand temporarily fixing device7and storage of the hands is easier. Further, the hand and the robot2may be smoothly coupled and the attachment and detachment work of the hand to the robot2may be easier. Note that, not limited to the above described configuration, but the hand temporarily fixing device7may be omitted and the unused hands may be placed in predetermined positions and predetermined attitudes, or randomly placed in any positions and attitudes. Next, the assist device9not provided for the second hand52, but provided only for the first hand51will be explained. The assist device9hoists and supports the first hand51, and thereby, gives assistance to the robot2to assist transportation of the object Q by the robot2. The assist device9is not particularly limited, but includes, in the embodiment, as shown inFIGS.1and2, a chain91with one end coupled to the first hand51and a hoisting device92placed on the ceiling and hoisting or feeding the chain91with the motion of the arm22. According to the assist device9, when the robot2with the first hand51grips and transports the object Q, the hoisting device92hoists the chain91and supports the first hand51, and thereby, a load transmitted from the object Q to the robot2may be reduced, preferably to zero. Therefore, the object Q may be smoothly transported by the robot2. Further, the object Q having a weight that can be transported by the robot2or more may be transported by the robot2. In other words, the heavier object Q can be transported using the robot2having lower transportable performance (transportable weight). Note that the assist device9is not particularly limited as long as the device may exert the above described functions. For example, the hoisting device92may have a configuration movable along a rail or the like placed on the ceiling and moving according to the motion of the arm22automatically or while being pulled by the arm22. Thereby, during work, the first hand51may be continuously supported from substantially directly above, and assistance may be given to the robot2more efficiently. The control apparatus6respectively independently controls driving of the first arm pivot mechanism to sixth arm pivot mechanism231to236, the claw portion opening and closing mechanism535, and the hoisting device92based on e.g. commands from a host computer (not shown). The control apparatus6includes e.g. a computer having a processor (CPU) that processes information, a memory communicably connected to the processor, and an external interface. Further, various programs that can be executed by the processor are stored in the memory and the processor may read and execute various programs stored in the memory etc. As above, the configuration of the robot system1is briefly explained. Next, a working method using the robot system1will be explained. The control apparatus6switches between an assisted work state in which the first hand51is coupled to the robot2and work is performed with assistance by the assist device9and a non-assisted work state in which the second hand52is coupled to the robot2and work is performed without assistance by the assist device9according to the weight of the object Q. 
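The weight-based switching just described can be pictured with the following minimal sketch. The rated-weight value and the function form are assumptions used only to illustrate the decision; they do not represent the actual program of the control apparatus6.

```python
# Illustrative sketch of selecting the work state according to the weight of the object Q.
RATED_WEIGHT_KG = 10.0   # hypothetical transportable (rated) weight of the robot 2

def select_work_state(object_weight_kg):
    if object_weight_kg > RATED_WEIGHT_KG:
        return "assisted"       # attach the first hand 51 and hoist via the assist device 9
    return "non-assisted"       # attach the second hand 52 and transport by the robot 2 alone
```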
As below, as shown inFIG.3, an example of assembling a projector PRO by performing a first transport step S1at which the robot2transports the object Q to a working space P, a working step S2at which the robot2performs work with increase or decrease in weight on the object Q in the working space P, and a second transport step S3at which the robot2transports the object Q after the working step S2out of the working space P will be explained. In this example, for assembly of the projector PRO, at the working step S2, work with increase in weight is performed on the object Q. At the first transport step S1, the object Q as a component and other components R of the projector PRO are transported to the working space P. Note that the components of the projector PRO include e.g. a light source that outputs light, a color separation optical device that separates the light into red, green, and blue, and a spatial light modulation device having liquid crystal panels for the respective colors and modulating luminous fluxes of the respective colors according to image signals, a prism that generates picture light by combining the luminous fluxes of the respective colors modulated by the spatial light modulation device, a projection lens that enlarges and projects the picture light generated by the prism at a desired enlargement factor, a cooling fan, and a package housing these respective components and having a housing and a lid member. Hereinafter, for convenience of explanation, the housing is the object Q and the other components are the components R. In the embodiment, the object Q and the respective components R have weights equal to or less than a rated weight that can be transported by the robot2. Accordingly, at the first transport step S1, as step S11, under control of the control apparatus6, first, the hand coupling portion227of the robot2is coupled to the robot coupling portion534of the second hand52temporarily fixed to the hand temporarily fixing device7and the second hand52without the assist device9is attached to the robot2. Then, as step S12, under control of the control apparatus6, the object Q and the respective components R placed outside of the working space P are gripped by the second hand52and transported into the working space P by movement of the arm22. That is, the work is performed in the non-assisted work state. The object Q and the respective components R have the weights equal to or less than the rated weight, and can be transported only by the robot2without assistance by the assist device9. Therefore, at the first transport step S1, the second hand52without the assist device9is selected as a hand attached to the robot2into the non-assisted work state, and thereby, the transport speeds of the object Q and the respective components R may be increased and the time taken for the step may be shortened. At the working step S2, the projector PRO is assembled by packaging of the respective components R in the object Q transported into the working space P. Specifically, the light source, the color separation optical device, the spatial light modulation device, the prism, the projection lens, the cooling fan, etc. as the components R are sequentially packaged in predetermined locations of the housing as the object Q and finally sealed by the lid member, and thereby, the projector PRO is assembled. 
Also, at the step, gripping of the respective components R having the weights equal to or less than the rated weight, transportation to the predetermined locations, and packaging are performed, and the second hand52is used successively from the first transport step S1. That is, the work is performed in the non-assisted work state. At the working step S2, as step S21, under control of the control apparatus6, the projector PRO is assembled by sequentially gripping the respective components R using the second hand52and packaging them in the predetermined locations of the housing as the object Q. Note that, in the middle of the working step S2, when work that is difficult for the second hand52is necessary, the second hand52may grip a tool and perform the work each time. Or, for example, when the hand group5has a third hand more suitable for the work than the first and second hands51,52, the hand attached to the robot2is changed from the second hand52to the third hand and the work may be performed using the third hand. In the embodiment, the projector PRO obtained at the working step S2has a weight more than the rated weight that can be transported by the robot2. Accordingly, the transportation of the projector PRO is difficult by the robot2itself. On this account, at the second transport step S3, as step S31, under control of the control apparatus6, first, the second hand52is detached from the robot2and the first hand51is attached to the robot2. Then, as step S32, under control of the control apparatus6, the projector PRO placed within the working space P is gripped by the first hand51and transported out of the working space P by movement of the arm22with assistance by the assist device9. That is, the work is performed in the assisted work state. Thereby, the projector PRO can be transported and the transportation may be smoothly performed. Next, contrary to the above described example, a working method of disassembling the projector PRO including the object Q is explained. As shown inFIG.4, this work disassembles the projector PRO by performing a first transport step S4at which the robot2transports the projector PRO to the working space P, a working step S5at which the robot2performs work with increase or decrease in weight on the projector PRO in the working space P, and a second transport step S6at which the robot2transports the object Q after the working step S5out of the working space P. In this example, for disassembly of the projector PRO, at the working step S5, the work with decrease in weight is performed on the object Q. At the first transport step S4, the projector PRO is transported to the working space P. In the embodiment, the projector PRO has a weight more than the rated weight. Accordingly, the transportation of the projector PRO is difficult by the robot2itself. On this account, at the first transport step S4, as step S41, under control of the control apparatus6, first, the first hand51is attached to the robot2. Then, as step S42, under control of the control apparatus6, the projector PRO placed outside of the working space P is gripped by the first hand51and transported into the working space P by movement of the arm22with assistance by the assist device9. That is, the work is performed in the assisted work state. Thereby, the projector PRO can be transported and the transportation may be smoothly performed.
At the working step S5, the projector PRO transported into the working space P is dissembled, and the object Q and the respective components R are removed from the housing and the respective parts are separated from one another. At this step, work of gripping and removing the object Q and the respective components R from the housing is performed. The object Q and the respective components R respectively have weights equal to or less than the rated weight, and this step may be performed by the robot2itself. Accordingly, at the working step S5, as step S51, under control of the control apparatus6, first, the first hand51is detached from the robot2and the second hand52is attached to the robot2. Then, as step S52, under control of the control apparatus6, the projector PRO is disassembled using the second hand52and the object Q and the respective components R are separated from one another. That is, this work is performed in the non-assisted work state. As described above, the work is performed using the second hand52without the assist device9on the object Q and the respective components R having the weights equal to or less than the rated weight, and thereby, the working speed of the work may be increased and the time taken for the step may be shortened. At the second transport step S6, the object Q and the respective components R are transported out of the working space P. As described above, the object Q and the respective components R respectively have weights equal to or less than the rated weight. Accordingly, also, at the step, the second hand52is used successively from the working step S5. That is, the work is performed in the non-assisted work state. At the second transport step S6, as step S61, under control of the control apparatus6, the object Q and the respective components R are sequentially gripped by the second hand52and transported out of the working space P by movement of the arm22. As described above, the second hand52without the assist device9is selected as the hand attached to the robot2, and thereby, the transportation speeds of the object Q and the respective components R may be increased and the time taken for the step may be shortened. As in the above described assembly work and disassembly work, the assisted work state and the non-assisted work state are switched according to the weight of the object Q, and thereby, when necessary, the object Q may be transported more reliably with assistance from the assist device9and, when not necessary, the object Q may be transported more quickly without assistance from the assist device9. Therefore, according to the working method, the transportation of the object Q may be smoothly performed. Particularly, as in the above described working method, work with increase in weight is performed on the object Q at the working step S2and the object Q has the weight equal to or less than the rated weight before the working step S2, however, the projector PRO including the object Q has a weight more than the rated weight after the working step S2. As described above, when it is necessary to transport the object Q having the weight equal to or less than the rated weight at one of the first transport step S1and the second transport step S3and transport the object Q having the weight more than the rated weight at the other, if the assisted work state and the non-assisted work state may be switched, the above described merits of the working method may be especially enjoyed. 
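The assembly and disassembly sequences above can be summarized as the sketch below, in which the hand is chosen before each transport step from the current weight of the workpiece. The step functions, hand labels, and weight values are hypothetical placeholders, not the actual control program.

```python
# Hedged sketch of the S1-S3 (assembly) / S4-S6 (disassembly) flow; not the actual program.
RATED_WEIGHT_KG = 10.0   # assumed rated weight of the robot 2

def transport(robot, workpiece_kg, destination):
    if workpiece_kg > RATED_WEIGHT_KG:
        robot.attach_hand("first hand 51")     # assisted work state (assist device 9)
    else:
        robot.attach_hand("second hand 52")    # non-assisted work state (faster)
    robot.move_workpiece_to(destination)

def assemble_projector(robot, housing_kg, assembled_kg):
    transport(robot, housing_kg, "working space P")                # first transport step S1
    robot.assemble()                                               # working step S2 (weight increases)
    transport(robot, assembled_kg, "outside of working space P")   # second transport step S3
```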
Particularly, in the embodiment, the robot2has the force detection sensor28. Accordingly, the force applied to the force detection sensor28is fed back to the control apparatus6, and thereby, for example, the coupling between the robot2and the first hand51or the second hand52may be performed at a lower failure frequency in a shorter time. Further, when the first hand51is attached to the robot2and the object Q having the weight more than the rated weight is gripped by the first hand51, driving of the hoisting device92is controlled so that the force applied to the force detection sensor28may be smaller, preferably zero, and thereby, transportation of the object Q having the weight more than the rated weight may be performed more reliably and smoothly. For gripping and lifting the object Q, the weight of the object Q may be known from the force applied to the force detection sensor28and the hand may be changed to appropriate one. As above, the robot system1and the working method using the robot system1are explained. As described above, the robot system1includes the robot2, the first hand51with the assist device9, the second hand52without the assist device9, and the control apparatus6that controls the robot2with the first hand51or the second hand52attached thereto to perform work with increase or decrease in weight on the object Q. The control apparatus6switches between the assisted work state in which the first hand51is coupled to the robot2and work is performed with assistance by the assist device9and the non-assisted work state in which the second hand52is coupled to the robot2and work is performed without assistance by the assist device9according to the weight of the object Q. According to the system, when necessary, the object Q may be transported more reliably with assistance from the assist device9and, when not necessary, the object Q may be transported more quickly without assistance from the assist device9. Therefore, the work may be smoothly performed. Further, as described above, the robot2has the force detection sensor28that detects the force applied to the robot2. Accordingly, the force applied to the force detection sensor28is fed back to the control apparatus6, and thereby, for example, the coupling between the robot2and the first hand51or the second hand52may be performed at a lower failure frequency in a shorter time. Further, when the first hand51is attached to the robot2and the object Q having the weight more than the rated weight is gripped by the first hand51, driving of the hoisting device92is controlled so that the force applied to the force detection sensor28may be smaller, and thereby, transportation of the object Q having the weight more than the rated weight may be performed more reliably and smoothly. For gripping and lifting the object Q, the weight of the object Q may be known from the force applied to the force detection sensor28and the hand may be changed to appropriate one. As described above, the working method of performing work with increase or decrease in weight on the object Q by the robot system1having the robot2, the first hand51with the assist device9, and the second hand52without the assist device9includes switching between the assisted work state in which the first hand51is coupled to the robot2and work is performed with assistance by the assist device9and the non-assisted work state in which the second hand52is coupled to the robot2and work is performed without assistance by the assist device9according to the weight of the object Q. 
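One way to read the force-feedback behavior described above is the following sketch: the hoisting device is driven so that the vertical load measured by the force detection sensor approaches zero, and the measured load is also used to pick the appropriate hand. The gain, the use of a gravity constant, and the method names are assumptions for illustration, not the disclosed algorithm.

```python
# Hedged sketch of force-feedback assistance and hand selection; interfaces are assumed.
G = 9.81  # m/s^2

def assist_control_step(force_sensor, hoist, dt, gain=0.5):
    fz_n = force_sensor.read_vertical_force_n()  # load transmitted from the object Q to the arm
    hoist.adjust_tension(gain * fz_n * dt)       # hoist more while the arm still carries load
    return fz_n

def choose_hand(force_sensor, rated_weight_kg):
    weight_kg = force_sensor.read_vertical_force_n() / G  # weight known from the sensor 28
    return "first hand 51 (assisted)" if weight_kg > rated_weight_kg else "second hand 52"
```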
According to the method, when necessary, the object Q may be transported more reliably with assistance from the assist device9and, when not necessary, the object Q may be transported more quickly without assistance from the assist device9. Therefore, the work may be smoothly performed. Further, as described above, the working method has the first transport steps S1, S4of transporting the object Q to the working space P by the robot2, the working steps S2, S5of performing work with increase or decrease in weight on the object Q by the robot2in the working space P, and the second transport steps S3, S6of transporting the object Q after the working steps S2, S5out of the working space P by the robot2, and, at the first transport steps S1, S4and the second transport steps S3, S6, the assisted work state and the non-assisted work state are switched according to the weight of the object Q. According to the method, when necessary, the object Q may be transported more reliably with assistance from the assist device9and, when not necessary, the object Q may be transported more quickly without assistance from the assist device9. Therefore, the work may be smoothly performed. Furthermore, as described above, when the weight of the object Q increases at the working step S2, the first transport step S1is performed in the non-assisted work state and the second transport step S3is performed in the assisted work state. Contrarily, when the weight of the object Q decreases at the working step S5, the first transport step S4is performed in the assisted work state and the second transport step S6is performed in the non-assisted work state. According to the method, the work may be smoothly performed. Second Embodiment FIG.5is a perspective view showing an overall configuration of a robot system according to a second embodiment of the present disclosure.FIGS.6and7are flowcharts respectively showing working processes.FIGS.8to12respectively show examples of a switching mechanism. A robot system1according to the embodiment is the same as the robot system1of the above described first embodiment except that the second hand52is omitted, the first transport steps S1, S4and the second transport steps S3, S6are performed using the first hand51having the assist device9, and the configuration of the assist device9is different. Accordingly, in the following description, the robot system1of the second embodiment will be explained with a focus on the differences from the above described first embodiment and the explanation of the same items will be omitted. InFIGS.5to12, the same configurations as those of the above described embodiment have the same signs. As shown inFIG.5, in the robot system1of the embodiment, the first hand51having the assist device9as a hand is attached to the robot2. Further, the assist device9has the chain91with one end coupled to the first hand51, the hoisting device92placed on the ceiling and hoisting the chain91with the motion of the arm22, and a switching mechanism93of switching between a coupled state in which the robot2is assisted via the first hand51and a decoupled state in which the robot2is not assisted. The switching between the coupled state and the decoupled state is performed by the control apparatus6. Next, a working method using the robot system1will be explained. The control apparatus6switches between the coupled state in which the robot2is assisted via the first hand51and the decoupled state in which the robot2is not assisted according to the weight of the object Q. 
As below, as is the case with the above described first embodiment, as shown inFIG.6, an example of assembling the projector PRO by performing the first transport step S1at which the robot2transports the object Q to the working space P, the working step S2at which the robot2performs work with increase or decrease in weight on the object Q in the working space P, and the second transport step S3at which the robot2transports the object Q after the working step S2out of the working space P will be explained. In this example, for assembly of the projector PRO, at the working step S2, work with increase in weight is performed on the object Q. At the first transport step S1, the object Q as a component and other components R of the projector PRO are transported to the working space P. The object Q and the respective components R have weights equal to or less than a rated weight that can be transported by the robot2. Accordingly, at the first transport step S1, as step S101, under control of the control apparatus6, first, the assist device9is decoupled. When the assist device9is decoupled, the hoisting device92is fixed and the length of the switching mechanism93or the chain91varies according to the motion of the robot2. Then, as step S102, under control of the control apparatus6, the object Q and the respective components R placed outside of the working space P are gripped by the first hand51, and transported into the working space P by movement of the arm22. That is, the work is performed in the non-assisted work state. The object Q and the respective components R have the weights equal to or less than the rated weight and can be transported only by the robot2without assistance from the assist device9. Therefore, at the first transport step S1, the assist device9is decoupled, and thereby, the transport speeds of the object Q and the respective components R may be increased and the time taken for the step may be shortened. At the working step S2, as step S201, under control of the control apparatus6, the projector PRO is assembled by sequentially gripping of the respective components R using the first hand51and packaging of the components in predetermined locations of the housing as the object Q. The projector PRO obtained at the working step S2has a weight more than the rated weight that can be transported by the robot2. Accordingly, the transportation of the projector PRO is difficult by the robot2itself. On this account, at the second transport step S3, as step S301, under control of the control apparatus6, first, the assist device9is coupled. When the assist device9is coupled, the length of the switching mechanism93and the chain91are fixed and the hoisting device92executes an operation to hoist or feed the chain91according to the motion of the robot2. Note that, for example, when the degree of stretch of the switching mechanism93is larger, particularly, when the switching mechanism93is fully stretched, the hoisting device92may feed the chain91so that the degree of stretch of the switching mechanism93may be smaller before the assist device9is coupled. Thereby, the length of the chain91that can be hoisted in the coupled state may be made longer. Further, when the coupled state is changed to the decoupled state again, the switching mechanism93may be sufficiently stretched and is harder to hinder the motion of the robot2. 
Then, as step S302, under control of the control apparatus6, the projector PRO placed within the working space P is gripped by the first hand51and transported out of the working space P by movement of the arm22with assistance by the assist device9. That is, the work is performed in the assisted work state. Thereby, the projector PRO can be transported and the transportation may be smoothly performed. Next, contrary to the above described example, a working method of disassembling the projector PRO including the object Q is explained. As shown inFIG.7, this work disassembles the projector PRO by performing the first transport step S4at which the robot2transports the projector PRO to the working space P, the working step S5at which the robot2performs work with increase or decrease in weight on the projector PRO in the working space P, and the second transport step S6at which the robot2transports the object Q after the working step S5out of the working space P. In this example, for disassembly of the projector PRO, at the working step S5, the work with decrease in weight is performed on the object Q. At the first transport step S4, the projector PRO is transported to the working space P. The projector PRO has a weight more than the rated weight. Accordingly, the transportation of the projector PRO is difficult by the robot2itself. On this account, at the first transport step S4, as step S401, under control of the control apparatus6, first, the assist device9is coupled. When the assist device9is coupled, the length of the switching mechanism93and the chain91is fixed and the hoisting device92executes an operation to hoist or feed the chain91according to the motion of the robot2. Note that, for example, when the degree of stretch of the switching mechanism93is larger, particularly, when the switching mechanism93is fully stretched, the hoisting device92may feed the chain91so that the degree of stretch of the switching mechanism93may be smaller before the assist device9is coupled. Thereby, the length of the chain91that can be hoisted in the coupled state may be made longer. Further, when the coupled state is changed to the decoupled state again, the switching mechanism93may be sufficiently stretched and is less likely to hinder the motion of the robot2. Then, as step S402, under control of the control apparatus6, the projector PRO placed outside of the working space P is gripped by the first hand51and transported into the working space P by movement of the arm22with assistance by the assist device9. That is, the work is performed in the assisted work state. Thereby, the projector PRO can be transported and the transportation may be smoothly performed. At the working step S5, the projector PRO transported into the working space P is disassembled, and the object Q and the respective components R are removed from the housing and the respective parts are separated from one another. The object Q and the respective components R respectively have weights equal to or less than the rated weight. Accordingly, the work may be performed only by the robot2without assistance from the assist device9. Therefore, at the working step S5, as step S501, under control of the control apparatus6, first, the assist device9is decoupled. When the assist device9is decoupled, the hoisting device92is fixed and the length of the switching mechanism93or the chain91varies according to the motion of the robot2.
Then, as step S502, under control of the control apparatus6, the projector PRO is disassembled using the first hand51, and the object Q and the respective components R are separated from one another. That is, this work is performed in the non-assisted work state. As described above, the assist device9is decoupled, and thereby, the working speed of disassembly may be increased and the time taken for the step may be shortened. At the second transport step S6, the object Q and the respective components R are transported out of the working space P. The object Q and the respective components R respectively have weights equal to or less than the rated weight. Accordingly, also, at the step, the assist device9is decoupled successively from the working step S5. At the second transport step S6, as step S601, under control of the control apparatus6, the object Q and the respective components R are sequentially gripped by the first hand51and transported out of the working space P by movement of the arm22. That is, the work is performed in the non-assisted work state. As described above, the assist device9is decoupled, and thereby, the transportation speed may be increased and the time taken for the step may be shortened. As in the above described assembly work and disassembly work, the coupled state and the decoupled state of the assist device9are switched according to the weight of the object Q, and thereby, when necessary, the object Q may be transported more reliably with assistance from the assist device9and, when not necessary, the object Q may be transported more quickly without assistance from the assist device9. Therefore, according to the working method, transportation of the object Q may be smoothly performed. Particularly, as in the above described working method, work with change in weight is performed on the object Q at the working step S2and the object Q has the weight equal to or less than the rated weight before the working step S2, however, the projector PRO including the object Q has the weight more than the rated weight after the working step S2. Or, the object Q has the weight more than the rated weight before the working step S5, however, the projector PRO including the object Q has a weight equal to or less than the rated weight after the working step S5. As described above, when it is necessary to transport the object Q having the weight equal to or less than the rated weight at ones of the first transport steps S1, S4or the second transport steps S3, S6and transport the object Q having the weight more than the rated weight at the others, if the coupled state and the decoupled state of the assist device9may be switched, the above described merits of the working method may be especially enjoyed. Note that the switching mechanism93is not particularly limited as long as the mechanism may switch between the coupled state and the decoupled state. For example, as shown inFIGS.8and9, the switching mechanism93has a tube931with closed ends, a plurality of rings932provided around the outer circumference of the tube931, an application portion933that applies air into the tube931. The switching mechanism93is placed in the middle of the chain91. In this configuration, as shown inFIG.8, the tube931is flexible and stretchable unless air is applied into the tube931from the application portion933. Under the condition, even when the hoisting device92hoists the chain91, the tube931stretches, and the force is absorbed and not transmitted to the first hand51and the first hand51is not supported. 
On the other hand, as shown inFIG.9, when air is applied into the tube931, the tube931expands and contracts in the longitudinal directions and becomes harder. Under the condition, when the hoisting device92hoists the chain91, the force is transmitted to the first hand51via the hard tube931and the first hand51is supported. Accordingly, the decoupled state in which the robot2is not assisted is obtained without application of air from the application portion933into the tube931, and the coupled state in which the robot2is assisted is obtained by application of air from the application portion933into the tube931. As another example, as shown inFIGS.10and11, the switching mechanism93has a cylinder934, a piston935displaceable in upward and downward directions relative to the cylinder934, a pulling spring936urging the piston935upward, and an application portion937applying air to a space9340within the cylinder934. The switching mechanism93is placed in the middle of the chain91and the upper end portion of the piston935and the lower end portion of the cylinder934are coupled to the chain91. In this configuration, as shown inFIG.10, the pulling spring936is stretchable unless air is applied into the space9340from the application portion937. Under the condition, even when the hoisting device92hoists the chain91, the pulling spring936stretches, and the force is absorbed and not transmitted to the first hand51and the first hand51is not supported. On the other hand, as shown inFIG.11, when air is applied into the space9340, the piston935moves to the lower end of the cylinder934against the urging force by the pulling spring936and the state is held. Under the condition, when the hoisting device92hoists the chain91, the force is transmitted to the first hand51via the piston935and the cylinder934and the first hand51is supported. Accordingly, the decoupled state in which the robot2is not assisted is obtained without application of air from the application portion937into the space9340, and the coupled state in which the robot2is assisted is obtained by application of air from the application portion937into the space9340. Further, as another example, as shown inFIG.12, the switching mechanism93has a case938, a spiral spring939housed within the case938, deformed by an external force, and having a part pulled out of the case938, and a lock mechanism930allowing or blocking deformation of the spiral spring939. The switching mechanism93is placed in the middle of the chain91, and the upper end portion of the case938and one end portion of the spiral spring939are coupled to the chain91. In the configuration, the deformation of the spiral spring939is allowed by the lock mechanism930, and thereby, even when the hoisting device92hoists the chain91, the force is absorbed by the deformation of the spiral spring939and not transmitted to the first hand51and the first hand51is not supported. On the other hand, the deformation of the spiral spring939is blocked by the lock mechanism930, and thereby, when the hoisting device92hoists the chain91, the force is transmitted to the first hand51via the case938and the spiral spring939and the first hand51is supported. Accordingly, the decoupled state in which the robot2is not assisted is obtained by allowance of the deformation of the spiral spring939by the lock mechanism930, and the coupled state in which the robot2is assisted is obtained by blocking of the deformation of the spiral spring939by the lock mechanism930. 
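Whichever of the above mechanisms implements the switching, the coupled/decoupled transitions of this embodiment, including the pre-coupling chain feed noted earlier, might be driven as in the following sketch. The stretch threshold and all object interfaces are hypothetical and are shown only to illustrate the sequence.

```python
# Illustrative sketch of switching the assist device 9 between states; not the actual controller.
STRETCH_LIMIT = 0.8   # assumed fraction of full stretch of the switching mechanism 93

def couple_assist(hoist, switching_mechanism):
    # Feed chain 91 first if the switching mechanism 93 is nearly fully stretched,
    # so that more chain can be hoisted once the coupled state is entered.
    while switching_mechanism.stretch_fraction() > STRETCH_LIMIT:
        hoist.feed_chain(0.01)        # feed a small length of chain 91
    switching_mechanism.lock()        # fix its length; the chain now transmits force
    hoist.enable_follow_mode()        # hoist or feed chain 91 with the motion of the robot 2

def decouple_assist(hoist, switching_mechanism):
    hoist.hold()                      # hoisting device 92 is fixed
    switching_mechanism.unlock()      # its length varies with the motion of the robot 2
```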
As described above, the working method of the embodiment is the working method of performing work with increase or decrease in weight on the object Q by the robot system1having the robot2, the first hand51as a hand coupled to the robot2and used, and the assist device9assisting the robot2, including switching between the assisted work state in which work is performed by the robot2with assistance from the assist device9and the non-assisted work state in which work is performed by the robot2without assistance from the assist device9according to the weight of the object Q. According to the method, when necessary, the object Q may be transported more reliably with assistance from the assist device9and, when not necessary, the object Q may be transported more quickly without assistance from the assist device9. Therefore, the work may be smoothly performed. Further, as described above, the working method has the first transport steps S1, S4of transporting the object Q to the working space P by the robot2, the working steps S2, S5of performing work with increase or decrease in weight on the object Q by the robot2in the working space P, and the second transport steps S3, S6of transporting the object Q after the working steps S2, S5out of the working space P by the robot2, and, at the first transport steps S1, S4and the second transport steps S3, S6, the assisted work state and the non-assisted work state are switched according to the weight of the object Q. According to the method, when necessary, the object Q may be transported more reliably with assistance from the assist device9and, when not necessary, the object Q may be transported more quickly without assistance from the assist device9. Therefore, the work may be smoothly performed. Furthermore, as described above, when the weight of the object Q increases at the working step S2, the first transport step S1is performed in the non-assisted work state and the second transport step S3is performed in the assisted work state. Contrarily, when the weight of the object Q decreases at the working step S5, the first transport step S4is performed in the assisted work state and the second transport step S6is performed in the non-assisted work state. According to the method, the work may be smoothly performed. According to the second embodiment, the same effects as those of the above described first embodiment may be exerted. Third Embodiment FIG.13is a perspective view showing a robot according to a third embodiment of the present disclosure. A robot system1according to the embodiment is the same as the robot system1of the above described first embodiment except that the robot2is placed on an unmanned transport vehicle8. Accordingly, in the following description, the robot system1of the third embodiment will be explained with a focus on the differences from the above described first embodiment and the explanation of the same items will be omitted. InFIG.13, the same configurations as those of the above described embodiments have the same signs. As shown inFIG.13, in the robot system1of the embodiment, the robot2is placed on the unmanned transport vehicle8. Accordingly, the robot2may move to various locations and the working range thereof is significantly wider. For example, the unmanned transport vehicle8travels with the object Q gripped, and thereby, the object Q may be transported from a far place to the working space P and, contrarily, the object Q may be transported from the working space P to a far place. 
The unmanned transport vehicle8is not particularly limited, but includes e.g. an AMR (Autonomous Mobile Robot) and an AGV (Automatic Guided Vehicle). According to the third embodiment, the same effects as those of the above described first embodiment may be exerted. As above, the working method and the robot system according to the present disclosure are explained based on the illustrated embodiments; however, the present disclosure is not limited to those. The configurations of the respective parts may be replaced by arbitrary configurations having the same functions. Or, another arbitrary configuration may be added to the present disclosure. Or, the respective embodiments may be appropriately combined. | 45,075 |
11858147 | MODES FOR CARRYING OUT THE DISCLOSURE Next, embodiments of the present disclosure are described with reference to the drawings. Outline Configuration of Substrate Transfer Robot1 FIG.1is a side view schematically illustrating a substrate transfer robot1according to one embodiment of the present disclosure, andFIG.2is a plan view schematically illustrating the substrate transfer robot1illustrated inFIG.1. As illustrated inFIGS.1and2, the substrate transfer robot1includes a robot body10, a controller15which controls operation of the robot body10, and a calibration jig9. The substrate transfer robot1performs carrying a substrate W into a substrate placing part (not illustrated) (loading), and taking the substrate W out of the substrate placing part (unloading). For example, the substrate transfer robot1may be provided to a system which conveys or transfers various kinds of substrates W, such as an EFEM (Equipment Front End Module), a sorter, and a substrate processing system. Configuration of Robot Body10 The robot body10includes a pedestal11, a robotic arm (hereinafter, referred to as an “arm12”) supported by the pedestal11, a substrate hold hand (hereinafter, referred to as a “hand13”) serially provided to a distal end part of the arm12, and a photoelectric sensor4of a transmission type provided to the hand13. Note that, although the transmission type photoelectric sensor is adopted as the photoelectric sensor4in this embodiment, a retroreflective or regression reflective photoelectric sensor may be adopted instead of the transmission type photoelectric sensor. The arm12according to this embodiment is comprised of a first link21extending horizontally, and a second link22coupled to the first link21through a translation joint. The first link21is provided with a translation device63, and operation of the translation device63translates the second link22to the first link21, in parallel to a longitudinal direction of the first link21. The translation device63includes, for example, a linear-movement mechanism (not illustrated), such as a rail and a slide block, a rack and a pinion, balls and a screw, or a cylinder, and a servo motor M3(seeFIG.3) as an actuator. However, the configuration of the translation device63is not limited to the above. A proximal end part of the arm12is supported by the pedestal11so as to be ascendable and descendible, and turnable. Operation of a hoist unit61expands and contracts an elevatable shaft23coupled to the proximal end part of the arm12so that the arm12ascends and descends to the pedestal11. The hoist unit61includes, for example, a linear-movement mechanism (not illustrated) which expands and contracts the elevatable shaft23from/to the pedestal11, and a servo motor M1(seeFIG.3) as an actuator. Moreover, operation of a turning unit62turns the arm12with respect to the pedestal11centering on a turning axis R. The turning axis R of the arm12is substantially coaxial with the axial center of the elevatable shaft23. The turning unit62includes, for example, a gear mechanism (not illustrated) which rotates the first link21on the turning axis R, and a servo motor M2(seeFIG.3) as an actuator. However, the configurations of the hoist unit61and the turning unit62are not limited to the above. The hand13includes a base part31coupled to a distal end of the arm12, and a blade32fixed to the base part31. The blade32is a thin plate member having a Y-shape (a U-shape) where a tip-end part is divided into two. 
The principal surfaces of the blade32are horizontal, and a plurality of support pads33which support the substrate W are provided on the blade32. The plurality of support pads33are disposed so as to contact an edge of the substrate W placed on the blade32. Further, a pusher34is provided to the hand13on a base-end side of the blade32. The substrate W placed on the blade32is gripped between the pusher34and the support pad33disposed at the tip-end part of the blade32. Note that, although the hand13according to this embodiment conveys the substrate W while holding the substrate W in a horizontal posture, the hand13may hold the substrate W in a vertical posture. Moreover, although the method of holding the substrate W by the hand13according to this embodiment is an edge gripping type, other known methods of holding the substrate W, such as a suction type, a drop-in type, and a placement type, may also be adopted, instead of the edge gripping type. At least one set of photoelectric sensors4are provided to the hand13. The photoelectric sensor4is provided to the back surface of the tip-end part of the blade32, which is two-way forked. Referring toFIGS.1and4, the photoelectric sensor4includes a light emitter41provided to one of the two-way forked tip-end parts of the blade32, and a photodetector42provided to the other. The light emitter41and the photodetector42are separated in a direction parallel to the principal surfaces of the blade32(i.e., horizontal direction). The light emitter41is provided with a light source which emits light which is used as a detection medium. The photodetector42is provided with a photodetecting element which converts the emitted light from the light emitter41into an electrical signal, in response to receiving the light. The light emitter41and the photodetector42are opposed to each other, and the light emitted from the light emitter41travels linearly and enters into a light entrance window of the photodetector42. InFIG.4, an optical axis43of the light emitted from the light emitter41is illustrated by a chain line. When an object passes through the optical axis43and the photoelectric sensor4detects that an amount of light which enters into the photodetector42decreases, the photoelectric sensor4outputs a detection signal to the controller15. Configuration of Calibration Jig9 The calibration jig9according to this embodiment includes two target bodies91and92, and target moving devices93and94which individually move the target bodies91and92. The target bodies91and92are desirable to have a pin, a shaft, or a pillar shape extending vertically, in order to secure a window in the height to be detected by the photoelectric sensor4and to reduce a detected positional error. Note that the shapes of the target bodies91and92are not limited to these shapes. The target moving devices93and94may be arbitrary devices, as long as they move the target bodies91and92between an area which can be detected by the photoelectric sensor4(detectable area) and an area which is lower than the detectable area and is undetectable by the photoelectric sensor4(undetectable area). The target moving devices93and94each includes, for example, a linear-movement mechanism, such as a rack and a pinion, and an electric motor as an actuator of the linear-movement mechanism. Note that the target moving devices93and94are not limited to this configuration. 
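A minimal sketch of the detection principle and target movement described above follows: a target body is raised into the detectable area, and an interruption is reported when the light reaching the photodetector falls well below the unobstructed level. The threshold ratio and the interfaces are assumptions, not the disclosed circuitry.

```python
# Hedged sketch of transmission-type beam-interrupt detection; names are illustrative.
def beam_interrupted(photodetector, clear_level, ratio=0.5):
    """True when the received light drops below a fraction of the unobstructed level."""
    return photodetector.read_light_level() < ratio * clear_level

def present_target(target_moving_device, photodetector, clear_level):
    target_moving_device.raise_target()      # move the target body into the detectable area
    return beam_interrupted(photodetector, clear_level)
```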
The target moving devices93and94are communicatably connected to the controller15, and operations of the target moving devices93and94are controlled by a jig controlling module152of the controller15. In a plan view, a straight line passing through the two target bodies91and92is defined as a target reference line A2(seeFIG.2). A clearance Δd between the two target bodies91and92parallel to the target reference line A2is known, and is shorter than a length of the optical axis43. Configuration of Controller15 FIG.3is a view illustrating a configuration of a control system of the substrate transfer robot1. The controller15illustrated inFIG.3includes functional parts, such as a robot controlling module151which controls operation of the robot body10, the jig controlling module152, an optical-axis inclination detecting module153, an optical-axis positional deviation detecting module154, and a reference turning position acquiring module156. Moreover, the controller15has an optical-axis deviation memory155which stores information related to a deviation of the optical axis43. The controller15is a so-called computer, and, for example, it includes a processing unit (processor), such as a microcontroller, a CPU, an MPU, a PLC, a DSP, an ASIC, or an FPGA, and a storage device, such as a ROM and a RAM (none of them is illustrated). The storage device stores a program executed by the processing unit, various fixed data, etc. The program stored in the storage device includes a rotational-axis search program according to this embodiment. In addition, the storage device stores teaching data for controlling the operation of the arm12, data related to the shapes and dimensions of the arm12and the hand13, data related to the shape and dimension of the substrate W held by the hand13, etc. The controller15performs processing for implementing the functional parts described above by the processing unit reading and executing software, such as the program stored in the storage device. Note that the controller15may perform each processing by a centralized control with a sole computer, or may perform each processing by a distributed control with a collaboration of a plurality of computers. The controller15is connected to the servo motor M1for the hoist unit61of the arm12, the servo motor M2for the turning unit62, and the servo motor M3for the translation device63. The servo motors M1to M3are provided with position transducers E1to E3, respectively, which detect rotation angles of their output shafts, and detection signals of the position transducers E1to E3are outputted to the controller15. In addition, the pusher34of the hand13is connected to the controller15. Then, the robot controlling module151of the controller15calculates a target pose after a given control period of time based on a pose of the hand13identified from the rotational positions detected by the position transducers E1to E3(i.e., the position and the posture in the space), and the teaching data stored in the storage device, and operates the servo motors M1to M3so that the hand13becomes in the target pose after the given control period of time. Method of Detecting Deviation of Optical Axis Below, a method of detecting a deviation of the optical axis of the photoelectric sensor4provided to the hand13, which is performed by the substrate transfer robot1having the above configuration, is described. As illustrated inFIG.2, a hand axis A1and a hand center C are defined in the hand13. 
The hand axis A1is an axis extending linearly parallel to the longitudinal direction of the hand13, and the blade32is formed in a line symmetry with respect to the hand axis A1as a symmetry axis in the plan view. In this embodiment, the hand axis A1is parallel to the extending and contracting direction of the arm12, and the hand13moves in parallel to the hand axis A1. The hand center C is a vertical line of the hand13which overlaps with the center of the substrate W held by the hand13. The light emitter41and photodetector42of the photoelectric sensor4are precisely attached to the hand13. However, the actual optical axis43may be deviated from a designed ideal optical axis44due to a mounting error and individual specificities of the light emitter41and the photodetector42. The deviation of the optical axis43includes an inclination of the optical axis43from the ideal optical axis44, and a positional deviation of the optical axis43from the ideal optical axis44in the hand axis A1direction. Thus, in the substrate transfer robot1according to this embodiment, the optical-axis inclination detecting module153of the controller15detects the inclination of the optical axis43from the ideal optical axis44, the optical-axis positional deviation detecting module154of the controller15detects the positional deviation of the optical axis43from the ideal optical axis44in the direction parallel to the hand axis A1, and the robot controlling module151operates the robot body10in consideration of the deviation of the optical axis43. Method of Detecting Inclination of Optical Axis43 First, a method of detecting the inclination of the optical axis43from the ideal optical axis44is described.FIG.4is a view illustrating the method of detecting the inclination of the optical axis43, andFIG.5is a flowchart of a detection processing for the inclination of the optical axis43by the substrate transfer robot1. As illustrated inFIG.4, a certain horizontal direction is a first direction X, and a horizontal direction perpendicular to the first direction X is a second direction Y. The calibration jig9is positioned so that the target reference line A2becomes parallel to the first direction X. That is, a distance between the turning axis R of the hand13and the first target body91in the second direction Y is equal to a distance between the turning axis R and the second target body92in the second direction Y. As illustrated inFIG.5, the controller15first acquires a reference turning position of the hand13(Step S1). As illustrated inFIG.4, the hand13at a given standby position is located at the reference turning position where the extending direction of the ideal optical axis44is the first direction X and the extending direction of the hand axis A1is the second direction Y, and is separated from the target bodies91and92in the second direction Y. Note that, if a spatial relationship between the calibration jig9and the robot body10is known, the reference turning position of the hand13is known. The reference turning position acquiring module156acquires the reference turning position stored or taught in advance, and the robot controlling module151operates the robot body10so that the hand13is located at the reference turning position. Next, the controller15performs a first search processing (Step S2). 
In the first search processing, the controller15operates the robot body10so that the hand13moves in the radial direction centering on the turning axis R, at the reference turning position, until the first target body91is detected by the photoelectric sensor4. Here, the hand13moves in the second direction Y. During the first search processing, the controller15operates the target moving devices93and94so that the first target body91is located in the detectable area and the second target body92is located in the undetectable area. The controller15calculates a position of the hand13in the second direction Y when the first target body91is detected by the first search processing, and stores it as a first detected position (Step S3). The controller15operates the robot body10so that the hand13moves to the standby position. Next, the controller15performs a second search processing (Step S4). In the second search processing, the controller15operates the robot body10so that the hand13moves in the radial direction centering on the turning axis R, at the reference turning position, until the second target body92is detected by the photoelectric sensor4. Here, the hand13moves in the second direction Y. During the second search processing, the controller15operates the target moving devices93and94so that the first target body91is located in the undetectable area and the second target body92is located in the detectable area. The controller15calculates a position of the hand13in the second direction Y when the second target body92is detected by the second search processing, and stores it as a second detected position (Step S5). The controller15operates the robot body10so that the hand13moves to the standby position. Then, the controller15calculates an inclination angle α of the optical axis43from the ideal optical axis44based on the first detected position and the second detected position (Step S6). A relation of the following formula can be established between the inclination angle α of the optical axis43, a difference ΔL (not illustrated) between the first detected position and the second detected position, and the clearance Δd between the target bodies91and92. tan α=ΔL/Δd Thus, the controller15calculates the difference ΔL between the first detected position and the second detected position, and calculates the inclination angle α of the optical axis43from the ideal optical axis44based on the difference ΔL and the clearance Δd between the target bodies91and92by using the above formula. Here, the controller15may determine that there is no inclination of the optical axis43if the inclination angle α is substantially 0, and may determine that there is an inclination of the optical axis43(i.e., an inclination of the optical axis43is detected) if the inclination angle α is not substantially 0. Note that the phrase “substantially 0” as used herein is not limited to exactly 0, but may include a value within a given adjustable range on the plus side and the minus side of 0. The controller15stores the calculated inclination angle α in the optical-axis deviation memory155(Step S7), and then ends this processing. 
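The calculation at Step S6 can be summarized in a few lines of code. The following Python sketch is illustrative only: the function name, the units, and the tolerance used for "substantially 0" are assumptions and not part of the disclosure; the sketch simply applies the relation tan α = ΔL/Δd to the two detected hand positions.

```python
import math

def inclination_angle(first_detected_y, second_detected_y, clearance_d, zero_tol=1e-6):
    """Inclination angle (radians) of the optical axis from the ideal optical
    axis, computed from the hand positions in the second direction Y at which
    the first and second target bodies were detected, and the known clearance
    Δd between the target bodies in the first direction X (hypothetical helper)."""
    delta_l = second_detected_y - first_detected_y  # difference ΔL between the detected positions
    alpha = math.atan2(delta_l, clearance_d)        # tan α = ΔL / Δd
    # Values within the adjustable tolerance are treated as "substantially 0",
    # i.e. no inclination of the optical axis is detected.
    return 0.0 if abs(alpha) < zero_tol else alpha

# Hypothetical example: targets 120 mm apart, detected positions differing by 0.6 mm
# inclination_angle(250.0, 250.6, 120.0) ≈ 0.005 rad (about 0.29 degrees)
```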
Method of Detecting Positional Deviation of Optical Axis43 Then, a method of detecting the positional deviation of the optical axis43from the ideal optical axis44in the direction parallel to the hand axis A1is described.FIGS.6and7are views illustrating the method of detecting the positional deviation of the optical axis43, andFIG.8is a flowchart of a detection processing for the positional deviation of the optical axis43by the substrate transfer robot1. The hand13of the robot body10holds the substrate W, and is located at a position evacuated in the second direction Y from an arbitrary substrate placing part in a posture in which the extending direction of the ideal optical axis44is the first direction X and the extending direction of the hand axis A1is the second direction Y. The substrate placing part could be anything as long as the substrate W can be placed thereon, such as a substrate boat, a substrate carrier, a substrate tray, a stage of a substrate processing device, and an aligner. As illustrated inFIG.8, the controller15first operates the robot body10so that the substrate W is transferred to the substrate placing part from the hand13(Step S11). Here, the controller15expands the arm12to advance the hand13in the second direction Y so that the blade32moves above the substrate placing part, and then lowers the arm12to move the blade32below the substrate placing part. Therefore, the substrate W is transferred to the substrate placing part from the hand13. Here, the position of the hand13in the second direction Y when the substrate W is placed on the substrate placing part is a “substrate placement position.” As illustrated inFIG.6, a distance L1from the hand center C to the ideal optical axis44in the direction parallel to the hand axis A1and a radius φW of the substrate W are both known values, and are stored in the controller15in advance. As illustrated inFIGS.6and7, the controller15operates the robot body10so that the hand13retreats in the second direction Y by the sum of the distance L1and the radius φW (L1+φW) from the substrate placement position (Step S12). The controller15stores the position of the hand13in the second direction Y as a reference position (Step S13). Then, the controller15turns ON the photoelectric sensor4, and operates the robot body10so that the hand13elevates from the reference position until the ideal optical axis44moves from below the substrate W to above the substrate W (Step S14). If the substrate W is not detected (NO at Step S15), the controller15determines that the optical axis43is deviated to the hand center C side from the ideal optical axis44(Step S16). If the substrate W is detected (YES at Step S15), the controller15determines that the optical axis43is in agreement with the ideal optical axis44or is deviated to the opposite side of the hand center C (Step S17). The controller15operates the robot body10based on the decision results of Steps S16and S17so that the photoelectric sensor4searches for the edge of the substrate W while advancing or retreating the hand13in the second direction Y (Step S18). Here, if determined that the optical axis43is deviated to the hand center C side from the ideal optical axis44at Step S16, a small amount of extension of the arm12and the ascending and descending of the arm12are repeated upon searching for the edge of the substrate W, while the photoelectric sensor4is ON. 
On the other hand, if determined at Step S17that the optical axis43is in agreement with the ideal optical axis44or is deviated to the opposite side of the hand center C, a small amount of contraction of the arm12and the ascending and descending of the arm12are repeated upon the search for the edge of the substrate W, while the photoelectric sensor4is ON. The controller15stores, as the detected position, the position of the hand13in the second direction Y when the edge of the substrate W is detected (Step S19). The controller15calculates a difference between the reference position and the detected position as an amount of positional deviation of the optical axis43from the ideal optical axis44in the direction parallel to the hand axis A1(Step S20). Here, the controller15may determine that there is no positional deviation of the optical axis43, if the amount of positional deviation is substantially 0, and it may determine there is a positional deviation of the optical axis43(i.e., the positional deviation of the optical axis43is detected), if the amount of positional deviation is not substantially 0. The controller15stores the amount of positional deviation in the optical-axis deviation memory155(Step S21), and then ends this processing. The controller15uses the inclination angle α of the optical axis43and the amount of positional deviation of the optical axis43which are stored in the optical-axis deviation memory155, for the control of operation of the robot body10. That is, the controller15generates an operating command of the robot body10so that the inclination angle α of the optical axis43and the amount of positional deviation of the optical axis43are calibrated. As described above, the substrate transfer robot1of this embodiment includes the robot body10having the hand13which holds the substrate W, the arm12which includes the turning axis R of the hand13and displaces the hand13, and the photoelectric sensor4provided to the tip-end parts of the hand13, the calibration jig9having the first target body91and the second target body92, and the controller15. The calibration jig9according to this embodiment further includes the first target moving device93which moves the first target body91to the undetectable range of the photoelectric sensor4from the detectable range, and the second target moving device94which moves the second target body92to the undetectable range of the photoelectric sensor4from the detectable range. The first target body91and the second target body92are lined up in the first direction X, and are in the plane perpendicular to the second direction Y. Moreover, the distance between the turning axis R and the first target body91in the second direction Y is equal to the distance between the turning axis R and the second target body92in the second direction Y. The ideal optical axis44is defined in the hand13. The controller15includes the robot controlling module151, the jig controlling module152, and the optical-axis inclination detecting module153. 
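The positional-deviation calculation of Steps S12 to S20 described above reduces to a difference of two hand positions. The sketch below is a simplified reading of those steps; the names and the sign convention (advancing toward the substrate placing part taken as the positive second direction Y) are assumptions rather than part of the disclosure.

```python
def optical_axis_positional_deviation(placement_y, edge_detected_y,
                                      distance_l1, substrate_radius, zero_tol=1e-6):
    """Positional deviation of the optical axis from the ideal optical axis in
    the direction parallel to the hand axis (hypothetical helper).

    placement_y      -- hand position in Y when the substrate was placed (Step S11)
    edge_detected_y  -- hand position in Y when the substrate edge was detected (Step S19)
    distance_l1      -- known distance L1 from the hand center C to the ideal optical axis
    substrate_radius -- known radius φW of the substrate W
    """
    # Steps S12/S13: retreat by L1 + φW from the substrate placement position;
    # that hand position is stored as the reference position. With no deviation,
    # the ideal optical axis then lies exactly on the substrate edge.
    reference_y = placement_y - (distance_l1 + substrate_radius)
    # Step S20: the difference between the detected position and the reference
    # position is the amount of positional deviation.
    deviation = edge_detected_y - reference_y
    return 0.0 if abs(deviation) < zero_tol else deviation
```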
The robot controlling module151acquires the reference turning position of the hand13where the turning axis R is the turning center so that the ideal optical axis44extends in the certain horizontal first direction X, and operates the robot body10so that the first search is performed for detecting the first target body91by the photoelectric sensor4while moving the hand13in the radial direction centering on the turning axis R, at the given first turning position (in this embodiment, the reference turning position) with respect to the reference turning position, and the second search is performed for detecting the second target body92by the photoelectric sensor4while moving the hand13in the radial direction centering on the turning axis R, at the given second turning position (in this embodiment, the reference turning position) with respect to the reference turning position. The jig controlling module152operates the first target moving device93and the second target moving device94so that, during the first search, the first target body91is in the detectable range and the second target body92is in the undetectable range, and during the second search, the first target body91is in the undetectable range and the second target body92is in the detectable range. The optical-axis inclination detecting module153calculates, as the first detected position, the position of the hand13in the horizontal second direction Y perpendicular to the first direction X when the first target body91is detected by the first search, calculates, as the second detected position, the position of the hand13in the second direction Y when the second target body92is detected by the second search, and detects the inclination of the optical axis43from the ideal optical axis44based on the difference between the first detected position and the second detected position. In the above, on the hand13, the intersecting position of the optical axis43and the first target body91and the intersecting position of the optical axis43and the second target body92are different. 
Similarly, the method of detecting the deviation of the optical axis of the substrate hold hand according to this embodiment includes: acquiring the reference turning position of the hand13where the turning axis R is the turning center so that the ideal optical axis44extends in the certain horizontal first direction X; performing a series of first search processings including detecting, by the photoelectric sensor4, the first target body91, while moving the hand13in the radial direction centering on the turning axis R, at the given first turning position (in this embodiment, the reference turning position) with respect to the reference turning position, and calculating, as the first detected position, the position of the hand13in the horizontal second direction Y perpendicular to the first direction X when the first target body91is detected; performing a series of second search processings including detecting, by the photoelectric sensor4, the second target body92, while moving the hand13in the radial direction centering on the turning axis R, at the given second turning position (in this embodiment, the reference turning position) with respect to the reference turning position, and calculating, as the second detected position, the position of the hand13in the second direction Y when the second target body92is detected; and detecting the inclination of the optical axis43from the ideal optical axis44based on the difference between the first detected position and the second detected position. In this method of detecting the deviation of the optical-axis, the distance between the turning axis R and the first target body91in the second direction Y is equal to the distance between the turning axis R and the second target body92in the second direction Y. On the hand13, the intersecting position of the optical axis43and the first target body91is different from the intersecting position of the optical axis43and the second target body92. According to the substrate transfer robot1and the method of detecting the deviation of the optical-axis, the deviation (inclination) of the optical axis43can be detected by using the simple calculation based on the difference between the two positions (i.e., the first detected position and the second detected position) at which the target bodies91and92are detected. In addition, since, in both the first search and the second search, the target bodies91and92are searched while moving the hand13in the second direction Y along with the optical axis43, the operational error of the robot body10caused by a backlash etc. and the detected position errors on the target bodies91and92can be eliminated. Therefore, the deviation (inclination) of the optical axis43can be calculated more securely. 
Moreover, in the substrate transfer robot1according to this embodiment, the robot controlling module151of the controller15operates the robot body10so that the hand13advances to the substrate placement position in the horizontal second direction Y perpendicular to the first direction X in the posture in which the ideal optical axis44extends in the first direction X, the substrate W is transferred to the substrate placing part from the hand13, and the hand13is moved to the position retreated from the substrate placement position in the second direction Y by the distance obtained by adding the radius φW of the substrate W to the distance L1between the hand center C and the ideal optical axis44, and the photoelectric sensor4searches for the edge of the substrate W while advancing and retreating the hand13in the second direction Y. The controller15includes the optical-axis positional deviation detecting module154which calculates, as the reference position, the position of the hand13in the second direction Y when the hand13is retreated, calculates, as the detected position (third detected position), the position of the hand13in the second direction Y when the edge of the substrate W is detected by the search, and detects the positional deviation of the optical axis43from the ideal optical axis44based on the difference between the reference position and the detected position. Similarly, the method of detecting the deviation of the optical axis of the substrate hold hand includes: moving the hand13to the given substrate placement position by advancing the hand13in the horizontal second direction Y perpendicular to the first direction X in the posture in which the ideal optical axis44extends in the horizontal first direction X to transfer the substrate W from the hand13to the substrate placing part provided at the substrate placement position; defining, as the hand center C, the position of the hand13overlapping in the vertical direction with the center of the substrate W held by the hand13, retreating the hand13in the second direction Y from the substrate placement position by the distance obtained by adding the radius φW of the substrate W to the distance L1between the hand center C and the ideal optical axis44, and calculating as the reference position, the position of the hand13in the second direction Y; searching for the edge of the substrate W while advancing and retreating the hand13in the second direction Y, and calculating, as the detected position (third detected position), the position of the hand13in the second direction Y when the photoelectric sensor4detects the edge of the substrate W by the search; and detecting the positional deviation of the optical axis43from the ideal optical axis44based on the difference between the reference position and the detected position. According to the substrate transfer robot1and the method of detecting the deviation of the optical-axis, the positional deviation of the optical axis43from the ideal optical axis44can be detected, without using a special jig. MODIFICATION 1 Modification 1 of the above embodiment is described. 
A substrate transfer robot1A according to Modification 1 is different in a configuration of a robot body10A from the robot body10according to the above embodiment, and a configuration of a calibration jig9A is also different from the calibration jig9according to the above embodiment.FIG.9is a plan view illustrating the substrate transfer robot1A according to Modification 1, andFIG.10is a view illustrating a configuration of a control system of the substrate transfer robot1A illustrated inFIG.9. In the description of Modification 1, members that are the same as or similar to those of the above embodiment are given the same reference characters in the drawings, and their description is omitted. The robot body10A illustrated inFIGS.9and10is different in that a carriage11A is provided instead of the pedestal11as compared with the robot body10according to the above embodiment, and other configurations are substantially the same. The carriage11A is provided with a propelling device64controlled by the controller15, and the carriage11A travels on rails18constructed on a floor by operation of the propelling device64. The propelling device64includes, for example, a slide block which slides on the rails18, a pinion which meshes with a rack provided to the rails18, and an electric motor which rotates the pinion (none of them is illustrated). Note that the configuration of the propelling device64is not limited to this. The calibration jig9A corresponding to the robot body10A is provided with a pin-shaped target body90. That is, the target body90corresponds to a combination of the first target body91and the second target body92according to the above embodiment. Note that the target body90does not need to be movable like the target bodies91and92. The detection processing for the positional deviation of the optical axis43performed by the substrate transfer robot1A according to Modification 1 is substantially the same as the above embodiment. The detection processing for the inclination of the optical axis43performed by the substrate transfer robot1A according to Modification 1 is slightly different from the above embodiment.FIG.11is a view illustrating the detection processing for the inclination of the optical axis43by the substrate transfer robot1A according to Modification 1. As illustrated inFIG.11, upon starting the detection of the inclination of the optical axis43, the robot body10A is positioned so that a traveling direction A4of the carriage11A (i.e., the extending direction of the rails18) becomes parallel to the first direction X. The hand13of the robot body10A is located at a given standby position distant from the target body90in the second direction Y in a posture in which the extending direction of the ideal optical axis44is the first direction X, and the extending direction of the hand axis A1is the second direction Y. Then, the controller15performs the processings from Steps S1to S7described above (seeFIG.5). Note that the concrete content of the second search processing at Step S4and the concrete content of processing for calculating the inclination angle α of the optical axis43at Step S6differ from the above embodiment. As illustrated inFIG.11, in the second search processing (Step S4), the controller15causes the propelling device64to move the carriage11A by a given traveling distance Δd′ in the first direction X, and causes the robot body10A to search for the target body90in the same manner as in the first search processing. 
Moreover, in the processing for calculating the inclination angle α of the optical axis43(Step S6), the controller15uses the traveling distance Δd′ instead of the clearance Δd between the target bodies91and92. As described above, the substrate transfer robot1A according to Modification 1 includes the robot body10A having the hand13which holds the substrate W, the arm12which displaces the hand13, and the transmission type photoelectric sensor4provided to the tip-end parts of the hand13, the target body90, and the controller15. The ideal optical axis44is defined in the hand13. The controller15includes the robot controlling module151and the optical-axis inclination detecting module153. Similarly to the above embodiment, the robot controlling module151acquires the reference turning position, and operates the robot body10A so that the robot body10A performs the first search and the second search. Note that the first turning position and the second turning position are both the reference turning position, and the robot controlling module151operates the robot body10A so that the hand13is shifted in the first direction X after the first search, and the second search is then performed. Moreover, the method of detecting the deviation of the optical axis of the substrate hold hand by the substrate transfer robot1A according to Modification 1 includes, similarly to the above embodiment, acquiring the reference turning position, performing the series of first search processings, performing the series of second search processings, and detecting the inclination of the optical axis43from the ideal optical axis44. Note that the first turning position and the second turning position are both the reference turning position, and a locus of the hand13during the first search processing separates from a locus of the hand13during the second search processing in the first direction X. MODIFICATION 2 Modification 2 of the above embodiment is described.FIG.12is a plan view of a substrate transfer robot1B according to Modification 2. In the description of Modification 2, members that are the same as or similar to those of the above embodiment are given the same reference characters in the drawings, and their description is omitted. As illustrated inFIG.12, in the substrate transfer robot1B according to Modification 2, the configuration of the robot body10is the same as that of the robot body10according to the above embodiment, and the configuration of the calibration jig9A used for the detection of the inclination of the optical axis43is different from the calibration jig9according to the above embodiment. The calibration jig9A of the substrate transfer robot1B according to Modification 2 has substantially the same configuration as the calibration jig9A used in Modification 1. The detection processing for the positional deviation of the optical axis43performed by the substrate transfer robot1B according to Modification 2 is substantially the same as the above embodiment. The detection processing for the inclination of the optical axis43performed by the substrate transfer robot1B according to Modification 2 is slightly different from the above embodiment.FIG.13is a view illustrating the detection processing for the inclination of the optical axis43by the substrate transfer robot1B according to Modification 2. In the detection processing for the inclination of the optical axis43by the controller15, the processings of Steps S1to S7described above (seeFIG.5) are performed. 
However, the concrete content of the first search processing at Step S2, the concrete content of the second search processing at Step S4, and the concrete content of processing for calculating the inclination angle α of the optical axis43at Step S6differ from the above embodiment. As illustrated inFIG.13, a vertical plane including the turning axis R of the hand13and the target body90extending in parallel to the turning axis R is defined as a reference vertical plane A3. Then, the position of the hand13at which the hand axis A1exists in the reference vertical plane A3(i.e., the hand axis A1overlaps with the reference vertical plane A3in the plan view) is used as the reference turning position. In the hand13located at the reference turning position, the reference vertical plane A3is perpendicular to the ideal optical axis44. In the first search processing (Step S2), the controller15first turns the arm12centering on the turning axis R by a minute angle (−β) from the reference turning position (seeFIG.14). Thus, the hand13is located at the first turning position (−β) where the reference vertical plane A3intersects with the ideal optical axis44, and then, by expanding and contracting the arm12, the photoelectric sensor4searches for the target body90while moving the hand13in the radial direction centering on the turning axis R. Note that, when the hand13is located at the first detected position, a distance d5between the hand axis A1and the target body90in the plan view can be calculated based on the distance from the target body90to the turning axis R, the turning position, and the distance from the turning axis R to the ideal optical axis44. In the second search processing (Step S4), the controller15first turns the arm12centering on the turning axis R by a given minute angle (+2β) (seeFIG.15). Thus, the hand13is located at the second turning position (+β) which is plane symmetrical to the first turning position (−β) with respect to the reference vertical plane A3, and then, by expanding and contracting the arm12, the photoelectric sensor4searches for the target body90while moving the hand13in the radial direction centering on the turning axis R. Note that, when the hand13is located at the second detected position, a distance d6between the hand axis A1and the target body90in the plan view can be calculated based on the distance from the target body90to the turning axis R, the turning position, and the distance from the turning axis R to the ideal optical axis44. The distance d5and the distance d6become ideally identical to each other. In the processing for calculating the inclination angle α of the optical axis43(Step S6), the controller15calculates the inclination angle α of the optical axis43from the ideal optical axis44based on the sum of the distance d5and the distance d6, and the difference between the first detected position and the second detected position. As described above, the substrate transfer robot1B according to Modification 2 includes the robot body10having the hand13which holds the substrate W, the arm12which displaces the hand13, and the transmission type photoelectric sensor4provided to the tip-end parts of the hand13, the target body90, and the controller15. The controller15includes the robot controlling module151and the optical-axis inclination detecting module153. Similarly to the above embodiment, the robot controlling module151acquires the reference turning position, and operates the robot body10so as to perform the first search and the second search. 
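The inclination calculation for Modification 2 (Step S6) can be sketched as follows. The disclosure states only that the angle is calculated from the sum of the distances d5 and d6 and from the difference between the two detected positions; one natural reading, consistent with the relation used in the main embodiment, is tan α ≈ ΔL/(d5+d6). The sketch below assumes that reading and uses hypothetical names.

```python
import math

def inclination_angle_mod2(first_detected_pos, second_detected_pos, d5, d6, zero_tol=1e-6):
    """Modification 2: inclination angle estimated from the radial hand
    positions detected at the first turning position (-β) and the second
    turning position (+β), and from the plan-view distances d5 and d6 between
    the hand axis A1 and the target body90 at those detections.

    Assumes tan α ≈ ΔL / (d5 + d6) by analogy with the main embodiment; the
    disclosure itself only names the inputs of the calculation."""
    delta_l = second_detected_pos - first_detected_pos  # difference between detected positions
    alpha = math.atan2(delta_l, d5 + d6)
    return 0.0 if abs(alpha) < zero_tol else alpha
```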
Note that the first turning position is a position turned from the reference turning position by a given angle in one of the turning directions, and the second turning position is a position turned from the reference turning position by a given angle in the other turning direction. Moreover, the method of detecting the deviation of the optical axis of the substrate hold hand by the substrate transfer robot1B according to Modification 2 includes, similarly to the above embodiment, acquiring the reference turning position, performing the series of first search processings, performing the series of second search processings, and detecting the inclination of the optical axis43from the ideal optical axis44. Note that the first turning position is a position turned from the reference turning position by a given angle in one of the turning directions, and the second turning position is a position turned from the reference turning position by a given angle in the other turning direction. MODIFICATION 3 Modification 3 of the above embodiment is described. Although in the substrate transfer robot1according to the above embodiment the reference turning position which is stored or is taught in advance is used, the reference turning position may be automatically taught to the substrate transfer robot1. Thus, in a substrate transfer robot1C according to Modification 3, the reference turning position is automatically taught by using a calibration jig9C which also has a function as a positioning jig for positioning the reference turning position. In description of Modification 3, the same reference characters are given in the drawings to members same as or similar to the above embodiment to omit description. FIG.16is a side view illustrating a configuration of the calibration jig9C. The calibration jig9C illustrated inFIG.16includes upper and lower supporting plates98uand98b, the pin-like target body90suspended from the upper supporting plate98u, and first and second object detection sensors95and96. The upper and lower supporting plates98uand98bare coupled to each other through a pillar (not illustrated). The first and second object detection sensors95and96are contactless sensors which detect an object entering between the upper supporting plate98uand the lower supporting plate98b. The first and second object detection sensors95and96are connected to the controller15, and when the object is detected by these object detection sensors95and96, a detection signal is transmitted to the controller15. The first and second object detection sensors95and96may be, for example, retroreflective type photoelectric sensors. In this case, light emitting/receiving devices are provided to the upper supporting plate98uas the first and the second object detection sensors95and96, and reflectors which reflect the light emitted from the light emitting/receiving devices are provided to the lower supporting plate98b. The target body90is disposed between a first sensor axis95cof the first object detection sensor95and a second sensor axis96cof the second object detection sensor96. In the plan view, the first sensor axis95cof the first object detection sensor95, the target body90and the second sensor axis96cof the second object detection sensor96are located on one straight line A5(seeFIG.18). The distance between the first sensor axis95cof the first object detection sensor95and the target body90is equal to the distance between the second sensor axis96cof the second object detection sensor96and the target body90. 
Moreover, the clearance between the first sensor axis95cof the first object detection sensor95and the second sensor axis96cof the second object detection sensor96is slightly larger than a dimension of the hand13in a direction perpendicular to the hand axis A1(i.e., a dimension of the hand13in the width direction). Next, a method of automatically teaching the reference turning position to the substrate transfer robot1C by using the calibration jig9C having the above configuration, is described. Note that the automatic teach processing of the reference turning position is mainly performed by the reference turning position acquiring module156of the controller15.FIG.17is a flowchart of the automatic teach processing of the reference turning position of the substrate transfer robot1C according to Modification 3.FIGS.18to20are views illustrating the automatic teach processing of the reference turning position by the substrate transfer robot1C according to Modification 3. As illustrated inFIG.18, the controller15first brings the tip-end parts of the hand13into a space between the first sensor axis95cof the first object detection sensor95and the second sensor axis96cof the second object detection sensor96(Step S31). Here, for facilitating the description, one of the tip-end parts of the hand13branched into two is referred to as a “first tip-end part,” and the other is referred to as a “second tip-end part.” In the hand13, the ideal optical axis44is defined so as to connect the first tip-end part and the second tip-end part, and the hand axis A1perpendicular to the ideal optical axis44in the plane parallel to the substrate placing surface of the hand13is defined. Next, as illustrated inFIG.19, the controller15operates the robot body10so that the hand13moves until the first tip-end part of the hand13is detected by the first object detection sensor95(Step S32). The controller15obtains plane coordinates of the first tip-end part when the first tip-end part of the hand13is detected by the first object detection sensor95as first sensor coordinates, and stores it (Step S33). Note that the controller15can obtain the plane coordinates of the first tip-end part of the hand13in a robot coordinate system based on the position of the hand13and the known shape of the hand13. Next, as illustrated inFIG.20, the controller15operates the robot body10so that the hand13moves until the second tip-end part of the hand13is detected by the second object detection sensor96(Step S34). The controller15obtains plane coordinates of the second tip-end part when the second tip-end part of the hand13is detected by the second object detection sensor96as second sensor coordinates, and stores it (Step S35). The controller15can obtain the plane coordinates of the second tip-end part of the hand13in the robot coordinate system based on the position of the hand13and the known shape of the hand13. The controller15calculates an intermediate position of the first sensor coordinates and the second sensor coordinates as target coordinates (Step S36). A vertical plane including a straight line which connects the target coordinates and the plane coordinates of the turning axis R is defined as the reference vertical plane A3. The controller15operates the robot body10so that the hand13turns to a position where the hand axis A1is within the reference vertical plane A3(Step S37). 
Then, the controller15acquires the turning position of the hand13when the hand axis A1is within the reference vertical plane A3as a reference turning position (Step S38). As described above, after the reference turning position is taught to the substrate transfer robot1C, the optical-axis deviation detection processing is performed like the above embodiment (or Modification 1 or 2). In this optical-axis deviation detection processing, the first object detection sensor95and the second object detection sensor96of the calibration jig9C are turned OFF. As described above, in the method of detecting the deviation of the optical axis of the substrate hold hand according to Modification 3, the acquiring the reference turning position includes bringing the first tip-end part and the second tip-end part of the hand13into the space between the first sensor axis95cand the second sensor axis96c; moving the hand13until the first tip-end part is detected by the first object detection sensor95and obtaining the plane coordinates of the first tip-end part when the first tip-end part is detected by the first object detection sensor95as the first sensor coordinates; moving the hand until the second tip-end part is detected by the second object detection sensor96and obtaining the plane coordinates of the second tip-end part when the second tip-end part is detected by the second object detection sensor96as the second sensor coordinates; moving the hand13until the straight line which connects the intermediate coordinates of the first sensor coordinates and the second sensor coordinates to the plane coordinates of the turning axis R, and the hand axis A1enter into the same vertical plane A3; and acquiring the turning position of the hand13as the reference turning position. Similarly, the substrate transfer robot1C according to Modification 3 includes the robot body10having the hand13which holds the substrate W, the arm12which displaces the hand13, and the transmission type photoelectric sensor4provided to the tip-end parts of the hand13, the calibration jig9C, and the controller15. The calibration jig9C has the first object detection sensor95and the second object detection sensor96. The first sensor axis95cof the first object detection sensor95, the target body90(the first target body91or the second target body92), and the second sensor axis96cof the second object detection sensor96are lined up in this order on the same straight line at equal intervals in the plan view. The controller15includes the robot controlling module151and the reference turning position acquiring module156. The robot controlling module151operates the robot body10so that the first tip-end part and the second tip-end part of the hand13enters into the space between the first sensor axis95cand the second sensor axis96c, the hand13moves until the first tip-end part is detected by the first object detection sensor95, the hand13moves until the second tip-end part is detected by the second object detection sensor96, and the hand13moves until the straight line which connects the target coordinates which are the plane coordinates of the first target body or the second target body to the plane coordinates of the turning axis R, and the hand axis A1enter into the same reference vertical plane A3. 
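Steps S36 to S38 amount to taking the midpoint of the two sensor coordinates and turning the hand until the hand axis A1 points from the turning axis R toward that midpoint. A minimal sketch follows, with plane coordinates represented as (x, y) tuples in the robot coordinate system; the angle convention and the helper name are assumptions, not part of the disclosure.

```python
import math

def reference_turning_angle(first_sensor_xy, second_sensor_xy, turning_axis_xy):
    """Turning angle (radians) at which the hand axis A1 lies in the reference
    vertical plane A3, i.e. points from the turning axis R toward the target
    coordinates (hypothetical helper for Steps S36 to S38)."""
    # Step S36: target coordinates = midpoint of the first and second sensor coordinates.
    target_x = (first_sensor_xy[0] + second_sensor_xy[0]) / 2.0
    target_y = (first_sensor_xy[1] + second_sensor_xy[1]) / 2.0
    # Steps S37/S38: the reference turning position is the turning position at
    # which the hand axis is aligned with the line from R to the target coordinates.
    return math.atan2(target_y - turning_axis_xy[1], target_x - turning_axis_xy[0])

# Hypothetical example:
# reference_turning_angle((480.0, 310.0), (520.0, 310.0), (0.0, 0.0)) ≈ 0.555 rad
```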
The reference turning position acquiring module156obtains the plane coordinates of the first tip-end part when the first tip-end part of the hand13is detected by the first object detection sensor95as the first sensor coordinates, obtains the plane coordinates of the second tip-end part when the second tip-end part of the hand13is detected by the second object detection sensor96as the second sensor coordinates, obtains the intermediate coordinates of the first sensor coordinates and the second sensor coordinates as the target coordinates, and acquires the turning position of the hand13when the hand axis A1is within the reference vertical plane A3as the reference turning position. Although the suitable embodiments (and modifications) of the present disclosure are described above, embodiments in which the concrete configurations and/or the details of the functions of the above embodiments are changed without departing from the spirit of the present disclosure may also be encompassed within the present disclosure. DESCRIPTION OF REFERENCE CHARACTERS 
1,1A,1B,1C: Substrate Transfer Robot
4: Photoelectric Sensor
9,9A,9C: Calibration Jig
10: Robot Body
11: Pedestal
11A: Carriage
12: Arm
13: Hand (Substrate Hold Hand)
15: Controller
18: Rails
21: First Link
22: Second Link
23: Elevatable Shaft
31: Base Part
32: Blade
33: Support Pad
34: Pusher
41: Light Emitter
42: Photodetector
43: Optical Axis
44: Ideal Optical Axis
61: Hoist Unit
62: Turning Unit
63: Translation Device
64: Propelling Device
90,91,92: Target Body
93,94: Target Moving Device
95,96: Object Detection Sensor
98u,98b: Supporting Plate
151: Robot Controlling Module
152: Jig Controlling Module
153: Optical-axis Inclination Detecting Module
154: Optical-axis Positional Deviation Detecting Module
155: Optical-axis Deviation Memory
156: Reference Turning Position Acquiring Module
11858148 | DETAILED DESCRIPTION Artificial intelligence refers to a field of studying artificial intelligence or methodology for making artificial intelligence, and machine learning refers to a field of defining various issues dealt with in the field of artificial intelligence and studying methodology for solving the various issues. The machine learning is defined as an algorithm that enhances the performance of a certain task through steady experience with the certain task. An artificial neural network (ANN) is a model used in the machine learning and may mean all of the models which have a problem-solving ability and are composed of artificial neurons (nodes) that form a network by synaptic connection. The artificial neural network may be defined by a connection pattern between neurons of different layers, a learning process for updating model parameters, and an activation function for generating an output value. The artificial neural network may include an input layer and an output layer, and optionally one or more hidden layers. Each layer includes one or more neurons, and the artificial neural network may include a synapse that connects a neuron to a neuron. Each neuron in the artificial neural network may output a function value of an activation function for input signals, a weight, and a bias input through the synapse. The model parameter means a parameter determined by learning and includes the weight of the synaptic connection and bias of the neuron, etc. In addition, a hyper parameter means a parameter to be set before learning in a machine learning algorithm, and includes a learning rate, the number of times of the repetition, a mini batch size, an initialization function, and the like. The purpose of the learning of the artificial neural network is regarded as determining a model parameter that minimizes a loss function. The loss function may be used as an index for determining an optimal model parameter in the learning process of the artificial neural network. The machine learning may be classified into supervised learning, unsupervised learning, and reinforcement learning based on a learning method. The supervised learning may refer to a method of training the artificial neural network in a state in which a label for learning data is given. The label may mean a correct answer (or a result value) that the artificial neural network must infer when the learning data is input to the artificial neural network. The unsupervised learning may refer to a method of training the artificial neural network in a state in which a label for learning data is not given. The reinforcement learning may refer to a learning method of training an agent defined in a certain environment to select a behavior or a behavior sequence that maximizes the cumulative reward in each state. Machine learning, which is implemented by a deep neural network (DNN) including a plurality of hidden layers of the artificial neural networks, is called deep learning, and the deep learning is part of the machine learning. Hereinafter, the machine learning is used as a meaning including the deep running. A robot may refer to a machine that automatically processes or operates a given task by its own ability. In particular, a robot having a function of recognizing an environment, of making a self-determination, and of performing operation may be referred to as an intelligent robot. 
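As a concrete illustration of the supervised-learning concepts above, namely model parameters (a weight and a bias) determined by learning, hyperparameters set before learning (the learning rate and the number of repetitions), and a loss function that the learning seeks to minimize, the following minimal Python sketch trains a single artificial neuron on labeled data. It is purely illustrative and is not part of the described AI device.

```python
def train_neuron(inputs, labels, learning_rate=0.01, epochs=1000):
    """Train one neuron (output = w * x + b) by gradient descent on a
    mean-squared-error loss; w and b are the model parameters determined by
    learning, while learning_rate and epochs are hyperparameters."""
    w, b = 0.0, 0.0
    for _ in range(epochs):                 # number of repetitions (hyperparameter)
        for x, y in zip(inputs, labels):    # labels = correct answers (supervised learning)
            prediction = w * x + b          # activation function omitted for brevity
            error = prediction - y          # error term of the squared loss
            w -= learning_rate * error * x  # update the weight of the synaptic connection
            b -= learning_rate * error      # update the bias of the neuron
    return w, b

# Example: learn y ≈ 2x + 1 from a few labeled samples
# w, b = train_neuron([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```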
Robots may be classified into industrial robots, medical robots, home robots, military robots, and the like according to the use purpose or field. The robot can be equipped with a manipulator including an actuator or a motor and can perform various physical operations such as moving a robot joint. In addition, a movable robot may include a wheel, a brake, a propeller, and the like and may travel on the ground or fly in the air. Autonomous driving refers to a technology enabling a vehicle to travel on its own accord. An autonomous vehicle refers to a vehicle that travels without a user's operation or with a minimum manipulation of the user. For example, the autonomous driving may include a technology for maintaining a lane while driving, a technology for automatically controlling a speed, such as adaptive cruise control, a technique for automatically traveling along a predetermined route, and a technology for automatically setting and traveling a route when a destination is set. The vehicle may include a vehicle having only an internal combustion engine, a hybrid vehicle having both an internal combustion engine and an electric motor, and an electric vehicle having only an electric motor, and may include not only an automobile but also a train, a motorcycle, and the like. Here, the autonomous vehicle may be regarded as a robot having an autonomous driving function. Virtual reality (VR), augmented reality (AR), and mixed reality (MR) are collectively referred to as extended reality. The VR technology provides a real-world object and background only in the form of a CG image, the AR technology provides a virtual CG image on a real object image, and the MR technology is a computer graphic technology that mixes and combines virtual objects into the real world. The MR technology is similar to the AR technology in that the real object and the virtual object are shown together. However, in the AR technology, the virtual object is used in the form that complements the real object, whereas in the MR technology, the virtual object and the real object are used in an equal manner. An XR technology may be applied to a head-mount display (HMD), a head-up display (HUD), a mobile phone, a tablet PC, a laptop computer, a desktop computer, a TV, a digital signage, and the like. A device to which the XR technology is applied may be referred to as an XR device. FIG.1shows an AI device according to an embodiment of the present invention. The AI device100may be implemented by a stationary device or a mobile device, such as a TV, a projector, a mobile phone, a smartphone, a desktop computer, a notebook, a digital broadcasting terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation system, a tablet PC, a wearable device, a set-top box (STB), a DMB receiver, a radio, a washing machine, a refrigerator, a desktop computer, a digital signage, a robot, a vehicle, and the like. Referring toFIG.1, the AI device100may include a communication circuit110, an input device120, a learning processor130, a sensor140, an output device150, a memory170, and a processor180. The communication circuit110may transmit and receive data to and from external devices such as other AI devices100ato100eor an AI server200by using wire/wireless communication technology. For example, the communication circuit110may transmit and receive sensor information, a user input, a learning model, and a control signal, etc., to and from external devices. 
Here, the communication technology used by the communication circuit110includes Global System for Mobile communication (GSM), Code Division Multi Access (CDMA), Long Term Evolution (LTE), fifth generation communication (5G), Wireless LAN (WLAN), Wireless Fidelity (Wi-Fi), Bluetooth™, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), ZigBee, Near Field Communication (NFC), and the like. The input device120may obtain various types of data. Here, the input device120may include a camera for inputting an image signal, a microphone for receiving an audio signal, and a user input unit for receiving information from a user. Here, the camera or the microphone may be treated as a sensor, and the signal obtained from the camera or the microphone may be referred to as sensing data or sensor information. The input device120may obtain learning data for model learning and input data, etc., to be used when an output is obtained by using the learning model. The input device120may obtain raw input data. In this case, the processor180or the learning processor130may extract an input feature by preprocessing the input data. The learning processor130may train a model composed of the artificial neural networks by using the learning data. Here, the trained artificial neural network may be referred to as a learning model. The learning model may be used to infer a result value for new input data instead of the learning data, and the inferred value may be used as a basis for determination to perform a certain operation. Here, the learning processor130may perform AI processing together with a learning processor240of the AI server200. Here, the learning processor130may include a memory integrated or implemented in the AI device100. Alternatively, the learning processor130may be implemented by using the memory170, an external memory directly coupled to the AI device100, or a memory maintained in an external device. The sensor140may obtain at least one of information on the inside of the AI device100, information on ambient environment of the AI device100, and user information. Here, the sensor140may be composed of one or more combinations of a proximity sensor, an illuminance sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an RGB sensor, an IR sensor, a fingerprint sensor, an ultrasonic sensor, an optical sensor, a microphone, a lidar, and a radar, etc. The output device150may generate an output related to a visual sense, an auditory sense, or a tactile sense. Here, the output device150may include a display for visually outputting information, a speaker for acoustically outputting information, and a haptic actuator for tactually outputting information. For example, the display may output images or videos, the speaker may output voice or sound, and the haptic actuator may cause vibration. The memory170may store data that supports various functions of the AI device100. For example, the memory170may store input data obtained by the input device120, learning data, a learning model, a learning history, etc. The memory170may include at least one of a flash memory type memory, a hard disk type memory, a multimedia card micro type memory, a card type memory (for example, SD or XD memory, etc.), a magnetic memory, a magnetic disk, an optical disk, a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), a programmable read-only memory (PROM), an electrically erasable programmable read only memory (EEPROM). 
The processor180may determine at least one executable operation of the AI device100based on information that is determined or generated by using a data analysis algorithm or the machine learning algorithm. The processor180may control the components of the AI device100and perform the determined operation. To this end, the processor180may request, search, receive, or utilize data of the learning processor130or the memory170. The processor180may control the components of the AI device100such that operations which are predicted or are determined to be desirable among the at least one executable operation are performed. Here, when the processor180needs to be associated with an external device in order to perform the determined operation, the processor180may generate a control signal for controlling the corresponding external device and transmit the generated control signal to the corresponding external device. The processor180may obtain intention information for the user input and may determine user's requirements based on the obtained intention information. Here, the processor180may obtain intention information corresponding to the user input by using at least one of a speech to text (STT) engine for converting voice input into a text string or a natural language processing (NLP) engine for obtaining intention information of a natural language. Here, at least a portion of at least one of the STT engine or the NLP engine may be composed of an artificial neural network trained according to the machine learning algorithm. At least one of the STT engine or the NLP engine may be trained by the learning processor130, may be trained by the learning processor240of the AI server200, or may be trained by their distributed processing. The processor180may collect history information including operation contents of the AI device100or a user's feedback on the operation, and the like, and store the history information in the memory170or in the learning processor130, or transmit the history information to the external device such as the AI server200, etc. The collected history information may be used to update the learning model. The processor180may control at least some of the components of the electronic device100in order to execute an application program stored in the memory170. In addition, the processor180may operate two or more of the components included in the AI device100in combination with each other in order to execute the application program. FIG.2shows the AI server according to the embodiment of the present invention. Referring toFIG.2, the AI server200may mean a device which trains the artificial neural network by using the machine learning algorithm or mean a device which uses the trained artificial neural network. Here, the AI server200may be composed of a plurality of servers to perform distributed processing or may be defined as a 5G network. Here, the AI server200may be included as a component of the AI device100, and may perform at least a portion of the AI processing together. The AI server200may include a communication circuit210, a memory230, the learning processor240, a processor260, and the like. The communication unit210may transmit and receive data to and from an external device such as the AI device100. The memory230may store a model (or an artificial neural network231) which is being trained or has been trained through the learning processor240. The learning processor240may train the artificial neural network231by using the learning data. 
The learning model of the artificial neural network may be used while mounted on the AI server200or while mounted on an external device such as the AI device100. The learning model may be implemented in hardware, software, or by a combination of hardware and software. When the learning model is partially or wholly implemented in software, one or more instructions constituting the learning model may be stored in the memory230. The processor260may infer a result value for new input data by using the learning model and may generate responses or control commands based on the inferred result value. FIG.3shows an AI system according to the embodiment of the present invention. Referring toFIG.3, in the AI system1, one or more of the AI server200, a robot100a, an autonomous vehicle100b, an XR device100c, a smartphone100d, or a home appliance100eare connected to a cloud network10. Here, the robot100a, the autonomous vehicle100b, the XR device100c, the smartphone100d, or the home appliance100e, to which an AI technology is applied, may be referred to as AI devices100ato100e. The cloud network10may mean a network which forms a part of a cloud computing infrastructure or exists within the cloud computing infrastructure. Here, the cloud network10may be configured with a 3G network, a 4G or long-term evolution (LTE) network, or a 5G network, etc. That is, the respective devices100ato100eand200constituting the AI system1may be connected to each other through the cloud network10. The respective devices100ato100eand200can communicate with each other through base stations, and also, they can communicate directly with each other without base stations. The AI server200may include a server which performs artificial intelligence processing and a server which performs operations on big data. The AI server200may be connected through the cloud network10to at least one of the robot100a, the autonomous vehicle100b, the XR device100c, the smartphone100d, or the home appliance100ewhich are AI devices that constitute the AI system1. The AI server200may support at least a portion of the artificial intelligence processing of the connected AI devices100ato100e. Here, the AI server200in lieu of the AI devices100ato100emay train the artificial neural network in accordance with the machine learning algorithm and may directly store the learning model or transmit it to the AI devices100ato100e. Here, the AI server200may receive input data from the AI devices100ato100e, may infer a result value for the received input data by using the learning model, may generate a response or a control command based on the inferred result value, and may transmit the response or the control command to the AI devices100ato100e. Alternatively, the AI devices100ato100emay infer the result value for the input data by directly using the learning model, and may generate a response or a control command based on the inferred result value. Hereinafter, various embodiments of the AI devices100ato100eto which the above-described technology is applied will be described. The AI devices100ato100eshown inFIG.3may be regarded as a specific embodiment of the AI device100shown inFIG.1. The AI technology is applied to the robot100aand the robot100amay be implemented as a guide robot, a transport robot, a cleaning robot, a wearable robot, an entertainment robot, a pet robot, an unmanned flying robot, or the like.
The robot100amay include a robot control module for controlling its operations, and the robot control module may mean a software module or may mean a chip obtained by implementing the software module by hardware. The robot100auses sensor information obtained from various kinds of sensors, thereby obtaining the state information of the robot100a, detecting (recognizing) ambient environment and objects, generating map data, determining a travel path and a driving plan, determining a response to user interaction, or determining the operation. Here, in order to determine the travel path and the driving plan, the robot100amay use the sensor information obtained from at least one sensor among a lidar, a radar, and a camera. The robot100amay perform the above operations by using the learning model composed of at least one artificial neural network. For example, the robot100amay recognize ambient environment and objects by using the learning model and may determine the operation by using information on the recognized ambient environment or the recognized object. Here, the learning model may be trained directly by the robot100aor may be trained by external devices such as the AI server200, etc. Here, the robot100amay perform the operation by producing a result through the direct use of the learning model and may also perform the operation by transmitting the sensor information to external devices such as the AI server200, etc., and by receiving the result produced accordingly. The robot100amay use at least one of the map data, the object information detected from the sensor information, or the object information obtained from the external device to determine the travel path and the driving plan, and may be made to travel along the determined travel path and driving plan by controlling a driving unit. The map data may include object identification information on various objects disposed in a space where the robot100amoves. For example, the map data may include the object identification information on fixed objects such as a wall, a door, etc., and movable objects such as a flowerpot, a desk, etc. Also, the object identification information may include names, types, distances, locations, etc. Also, the robot100amay perform the operation or travel by controlling the driving unit based on the control/interaction of the user. Here, the robot100amay obtain intent information of the interaction according to the action or voice utterance of the user and may determine a response based on the obtained intent information and perform the operation. The AI technology is applied to the autonomous vehicle100b, and the autonomous vehicle100bmay be implemented as a mobile robot, a vehicle, an unmanned flying vehicle, or the like. The autonomous vehicle100bmay include an autonomous driving control module for controlling an autonomous driving function, and the autonomous driving control module may mean a software module or a chip obtained by implementing the software module by hardware. The autonomous driving control module may be included in the autonomous vehicle100bas a component thereof, or may be connected to the autonomous vehicle100bas a separate external hardware. The autonomous vehicle100buses sensor information obtained from various kinds of sensors, thereby obtaining the state information of the autonomous vehicle100b, detecting (recognizing) ambient environment and objects, generating map data, determining a travel path and a driving plan, or determining the operation. 
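The object identification information described above lends itself to a simple record structure. The sketch below is illustrative only; the field names and example values are assumptions introduced here for clarity and are not part of the embodiment.

```python
from dataclasses import dataclass

@dataclass
class MapObject:
    # One entry of object identification information in the map data
    name: str            # e.g., "door", "flowerpot"
    obj_type: str        # "fixed" (wall, door) or "movable" (flowerpot, desk)
    distance_m: float    # distance from the robot when last observed, in meters
    location: tuple      # (x, y) coordinates in the map frame

# A tiny example map mixing fixed and movable objects around the robot
map_data = [
    MapObject("wall", "fixed", 2.4, (0.0, 2.4)),
    MapObject("door", "fixed", 3.1, (1.2, 2.9)),
    MapObject("flowerpot", "movable", 1.0, (-0.8, 0.6)),
]

# A travel path can then avoid the cells occupied by fixed objects
fixed_obstacles = [m.location for m in map_data if m.obj_type == "fixed"]
```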
Here, in order to determine the travel path and the driving plan, the autonomous vehicle100b, as with the robot100a, may use the sensor information obtained from at least one sensor among the lidar, the radar, and the camera. In particular, the autonomous vehicle100bmay recognize environment or objects of an area where a view is blocked or an area spaced apart by a distance larger than a certain distance, by receiving the sensor information from external devices, or may receive the information directly recognized by the external devices. The autonomous vehicle100bmay perform the above operations by using the learning model composed of at least one artificial neural network. For example, the autonomous vehicle100bmay recognize ambient environment and objects by using the learning model and may determine a driving line by using information on the recognized ambient environment or the recognized object. Here, the learning model may be trained directly by the autonomous vehicle100bor may be trained by external devices such as the AI server200, etc. Here, the autonomous vehicle100bmay perform the operation by producing a result through the direct use of the learning model and may also perform the operation by transmitting the sensor information to external devices such as the AI server200, etc., and by receiving the result produced accordingly. The autonomous vehicle100bmay use at least one of the map data, the object information detected from the sensor information, or the object information obtained from the external device to determine the travel path and the driving plan, and may be made to travel along the determined travel path and driving plan by controlling a driving unit. The map data may include object identification information on various objects disposed in a space (e.g., a road) where the autonomous vehicle100btravels. For example, the map data may include the object identification information on fixed objects such as street lights, rocks, buildings, etc., and movable objects such as vehicles, pedestrians, etc. Also, the object identification information may include names, types, distances, locations, etc. Also, the autonomous vehicle100bmay perform the operation or travel by controlling the driving unit based on the control/interaction of the user. Here, the autonomous vehicle100bmay obtain intent information of the interaction according to the action or voice utterance of the user and may determine a response based on the obtained intent information and perform the operation. The AI technology is applied to the XR device100cand the XR device100cmay be implemented as a head-mount display (HMD), a head-up display (HUD) provided in the vehicle, a television, a mobile phone, a smartphone, a computer, a wearable device, a home appliance, a digital signage, a vehicle, a stationary robot, a mobile robot, or the like. The XR device100cmay analyze three-dimensional point cloud data or image data obtained from various sensors or the external devices, and may generate position data and attribute data for the three-dimensional points, thereby obtaining information on the surrounding space or the real object, and rendering and outputting an XR object. For example, the XR device100cmay cause the XR object including additional information on the recognized object to be output in correspondence to the recognized object. The XR device100cmay perform the above-described operations by using the learning model composed of at least one artificial neural network.
For example, the XR device100cmay recognize the real object from the three-dimensional point cloud data or the image data by using the learning model, and may provide information corresponding to the recognized real object. Here, the learning model may be directly trained by the XR device100c, or may be trained by the external device such as the AI server200. Here, the XR device100cmay perform the operation by producing a result through the direct use of the learning model and may also perform the operation by transmitting the sensor information to external devices such as the AI server200, etc., and by receiving the result produced accordingly. The AI technology and an autonomous driving technology are applied to the robot100a, and the robot100amay be implemented as a guide robot, a transport robot, a cleaning robot, a wearable robot, an entertainment robot, a pet robot, an unmanned flying robot, or the like. The robot100ato which the AI technology and the autonomous driving technology are applied may refer to a robot itself having the autonomous driving function or the robot100ainteracting with the autonomous vehicle100b. The robot100ahaving the autonomous driving function may be collectively referred to as a device that moves for itself along a given route even without user control or moves by determining the route by itself. The robot100ahaving the autonomous driving function and the autonomous vehicle100bmay use a common sensing method so as to determine at least one of the travel path and the driving plan. For example, the robot100ahaving the autonomous driving function and the autonomous vehicle100bmay determine at least one of the travel path and the driving plan by using the information sensed through the lidar, the radar, and the camera. The robot100athat interacts with the autonomous vehicle100bexists separately from the autonomous vehicle100b. Inside or outside the autonomous vehicle100b, the robot100amay perform operations associated with the autonomous driving function of the autonomous vehicle100bor associated with the user who has ridden on the autonomous vehicle100b. Here, the robot100athat interacts with the autonomous vehicle100bmay control or assist the autonomous driving function of the autonomous vehicle100bby obtaining the sensor information on behalf of the autonomous vehicle100band providing the sensor information to the autonomous vehicle100b, or by obtaining the sensor information, generating the ambient environment information or the object information, and providing the information to the autonomous vehicle100b. Alternatively, the robot100athat interacts with the autonomous vehicle100bmay monitor the user who has ridden on the autonomous vehicle100b, or may control the function of the autonomous vehicle100bthrough the interaction with the user. For example, when it is determined that the driver is in a drowsy state, the robot100amay activate the autonomous driving function of the autonomous vehicle100bor assist the control of the driving unit of the autonomous vehicle100b. Here, the function of the autonomous vehicle100bcontrolled by the robot100amay include not only the autonomous driving function but also the function provided by a navigation system or an audio system provided within the autonomous vehicle100b. Alternatively, outside the autonomous vehicle100b, the robot100athat interacts with the autonomous vehicle100bmay provide information to the autonomous vehicle100bor assist the function of the autonomous vehicle100b. 
For example, the robot100amay provide the autonomous vehicle100bwith traffic information including signal information and the like, as a smart traffic light does, and may automatically connect an electric charger to a charging port by interacting with the autonomous vehicle100b, like an automatic electric charger of an electric vehicle. The AI technology and the XR technology are applied to the robot100a, and the robot100amay be implemented as a guide robot, a transport robot, a cleaning robot, a wearable robot, an entertainment robot, a pet robot, an unmanned flying robot, a drone, or the like. The robot100ato which the XR technology is applied may refer to a robot that is subjected to control/interaction in an XR image. In this case, the robot100amay be separate from the XR device100c, and the two may interwork with each other. When the robot100awhich is subjected to control/interaction in the XR image obtains the sensor information from the sensors including a camera, the robot100aor the XR device100cmay generate the XR image based on the sensor information, and the XR device100cmay output the generated XR image. The robot100amay operate based on the control signal input through the XR device100cor based on the user interaction. For example, the user may check the XR image corresponding to a view of the robot100ainterworking remotely through the external device such as the XR device100c, may control the autonomous driving path of the robot100athrough the interaction, may control the operation or driving, or may check information on the surrounding objects. The AI technology and the XR technology are applied to the autonomous vehicle100b, and the autonomous vehicle100bmay be implemented as a mobile robot, a vehicle, an unmanned flying vehicle, or the like. The autonomous vehicle100bto which the XR technology is applied may refer to an autonomous vehicle equipped with a means for providing an XR image or an autonomous vehicle that is subjected to control/interaction in an XR image. Particularly, the autonomous vehicle100bthat is subjected to control/interaction in an XR image may be separate from the XR device100c, and the two may interwork with each other. The autonomous vehicle100bequipped with the means for providing an XR image may obtain the sensor information from the sensors including a camera and may output the XR image generated based on the obtained sensor information. For example, the autonomous vehicle100bmay include a HUD to output an XR image, thereby providing a passenger with an XR object corresponding to a real object or an object in the screen. Here, when the XR object is output to the HUD, at least a part of the XR object may be output so as to overlap an actual object to which the passenger's gaze is directed. Meanwhile, when the XR object is output to the display provided within the autonomous vehicle100b, at least a part of the XR object may be output so as to overlap the object in the screen. For example, the autonomous vehicle100bmay output XR objects corresponding to objects such as a lane, another vehicle, a traffic light, a traffic sign, a two-wheeled vehicle, a pedestrian, a building, and the like. When the autonomous vehicle100bthat is subjected to control/interaction in the XR image obtains the sensor information from the sensors including a camera, the autonomous vehicle100bor the XR device100cmay generate the XR image based on the sensor information, and the XR device100cmay output the generated XR image.
The autonomous vehicle100bmay operate based on the control signal input through the external device such as the XR device100cor based on the user interaction. Hereinafter, various embodiments of a control method of the robot100awill be described as an example of the AI device100. However, the following embodiments are not limited to be applicable only to the robot100a, and may be applied to other AI devices100bto100ewithin a range in which the spirit is not changed. For example, the following embodiments may be applied to the autonomous vehicle100b, the XR device100c, etc., implemented as a mobile robot or the like. FIGS.4A to4Care perspective views of the robot according to various embodiments, andFIG.5is a block diagram showing the configuration of the robot according to the embodiment of the present invention. A robot300is the AI device100described with reference toFIGS.1to3, and may be, for example, the robot100ashown inFIG.3. In description of the embodiments ofFIGS.4A to4C and5, repetitive descriptions of components the same as or similar to those shown inFIGS.1to3will be omitted. Referring toFIG.4A, the robot300may include a drive wheel301and may be a mobile robot capable of moving in an autonomous driving method and/or in a following driving method. The drive wheel301may be provided by using a motor (not shown). WhileFIG.4shows that the robot300includes four drive wheels301, the embodiment of the present disclosure is not limited thereto. That is, the robot300may be provided with two or more drive wheels301, and, for example, may be driven by a front wheel or a rear wheel when the robot300includes four or more drive wheels301. AlthoughFIG.4Ashows that the robot300moves by using the drive wheel301, the embodiment is not limited to this. That is, in various embodiments, the robot300may be configured to include, as shown inFIG.4B, at least one leg303, etc., and to move. The leg303is able to move the robot300by means of a motor and/or an actuator, etc., which is manipulated by a manipulator within the robot300. In the following embodiments, the drive wheel301, the leg303, the motor, the actuator and/or the manipulator may be designated as a driving part, as a control device capable of moving the robot300. The robot300may include at least one container302capable of receiving goods to be delivered. The at least one container302may be stacked in the vertical direction and/or in the lateral direction and may be integrally molded or physically fastened. Each container302may be configured to load goods individually by being composed of a drawer-type module shown inFIG.4Aor by including a door. Alternatively, each container302may be, as shown inFIGS.4B and4C, configured in the form of a deck. Particularly, in the embodiment shown inFIG.4C, the deck type container302may be configured to be able to move up and down by means of an elevating member. In other various embodiments, when the robot300has a robot arm, the robot300is also able to grip goods by using the robot arm without a separate container302. In various embodiments, the robot300may further include a physical structure such as a sliding module (shown inFIGS.21to24) for unloading goods loaded in the container302, the elevating module, or the robot arm, etc. Such a physical structure may include at least one joint which is controlled by the manipulator including the actuator and/or the motor. 
Referring toFIG.5, the robot300may include a communication circuit310, an input device320, a camera330, an output device340, a memory350, a processor360, and a motor370. The communication circuit310may perform communication with external devices (e.g., the AI server200shown inFIG.2) by using wired and wireless communication channels. The communication circuit310may include, for example, an antenna circuit, a wireless communication circuit, a wired communication circuit, etc. The communication circuit310may convert a signal received from the external device into data that can be processed by the processor360and transmit the data to the processor360. In various embodiments, the communication circuit310may communicate with the terminal of a manager who wants to send goods and the terminal of a recipient who receives the goods. The communication circuit310may communicate with the terminal of the manager or the terminal of the recipient through a server (e.g., the AI server200) and the like by using a long-range wireless communication technology (e.g., Wi-Fi, LTE, LTE-A, 5G, etc.), or may communicate directly with the terminal of the manager or the terminal of the recipient by using a short-range wireless communication technology (e.g., Bluetooth, NFC, etc.). The input device320may receive data or commands for the control of components from the outside, for example, a user. The user may be, for example, a manager who sends goods or a recipient who receives the goods. The input device320may include, for example, a camera, a microphone, a touch panel, a button, a switch, a keypad, a jog dial, a jog switch, and the like. In various embodiments, the input device320may receive information on goods (e.g., the type, quantity, size of the goods, whether to handle the goods with care, invoice number of the goods, etc.) from the manager who wants to deliver the goods and information on the recipient (e.g., the name and contact address of the recipient, and the address of a delivery destination, etc.). In addition, in various embodiments, the input device320may receive recipient identification information (e.g., an ID, password, OTP, image identification information (fingerprint, face, etc.), etc.). In various other embodiments, the above-listed information may be received from external devices (e.g., a server, the terminal of the manager, the terminal of the recipient, etc.) through the communication circuit310. In the embodiment, when the robot300is implemented by a communication input method that receives the user input only from external devices, the input device320may be omitted. The camera330may capture still images and videos. In one embodiment, the camera330may include one or more lenses, an image sensor, an image signal processor, a flash, and the like. The camera330may be mounted on various parts of the robot300. WhileFIGS.4A to4Cshow an example in which the camera330is provided in the front of the robot300, the camera330may be installed on the rear, side, etc., of the robot300in other embodiments. The camera330may be provided as a fixed type as shown inFIGS.4A to4C. Also, in another embodiment, the camera330may be provided rotatably up and down and/or right and left. In the embodiment, the camera330is configured with a stereo camera, an array camera, a 3D vision sensor, a time of flight (TOF) sensor, etc., and, thus, is able to obtain depth information of an image. Alternatively, the camera330may be, as a mono camera, arranged in various positions of the robot300. 
In such an embodiment, through an image obtained from each mono camera, the depth information of the image may be obtained. In various embodiments, the camera330may operate as the input device320that obtains information on goods by capturing a document, a barcode, a QR code, etc., in which the information on goods is recorded. In addition, the camera330may operate as the input device320that obtains recipient identification information by capturing a recipient's face, fingerprint, or the like. The output device340may visually, audibly, and/or tactually output the information through a display, a speaker, a haptic actuator, etc. In the embodiment, the display is implemented integrally with a touch panel for receiving user inputs and thus configure a touch screen. In the embodiment, information to be output from the robot300may be transmitted to an external device through the communication circuit310or the like. For example, the information generated from the robot300may be transmitted to the terminal of the manager and the terminal of the recipient through a long-range wireless communication technology (e.g., Wi-Fi, LTE, LTE-A, 5G, etc.), and may be output visually, aurally, and/or tactually from the terminal of the manager and the terminal of the recipient. Also, for example, the information generated from the robot300may be transmitted to another display or speaker, etc., through a short-range wireless communication technology (e.g., Bluetooth, NFC, etc.), and may be output visually and/or aurally from another display or speaker. In the embodiment, when the robot300is implemented to have a communication output method that outputs the generated information only through an external device, the output device340may be omitted. The memory350may store various data used by at least one component (e.g., the processor360) of the robot300. The data may include, for example, software and input/output data related to the software. The memory350may include a volatile memory or a nonvolatile memory. In various embodiments, when the robot300operates in a cloud environment, the memory350may be implemented by using a remote memory location that is allocated through a cloud computing system and is accessible by the robot300through communication. In this embodiment, the robot300includes the non-volatile memory, so that it may temporarily store some or all of both raw data before being stored in the remote memory location and/or data to be stored in the remote memory location. The processor360may control the above-described components in order to perform the robot control method according to the present invention. In various embodiments, the processor360may extract feature points from the image captured through the camera330and may analyze the extracted feature points to determine a target position of the robot300. The processor360may control a manipulator (not shown) driven by an actuator or a motor370such that the robot300moves to the determined target position. Further, the processor360may control the manipulator such that the robot300unloads the goods at the target position or waits at the target position. Hereinafter, the control method of the robot300by the processor360will be described in more detail.FIG.6is a flowchart showing the control method of the robot according to the embodiment. The control method of the robot300shown inFIG.6may be performed after the goods are loaded by the manager and the robot300arrives at a delivery destination based on pre-input information on the recipient. 
After arriving at the delivery destination, the robot300may obtain information on the door from the image of the delivery position (410). Specifically, the robot300may perform a door search based on the image (411). The robot300may continuously capture the surroundings including the front of the robot300through the camera330. While continuously capturing, the robot300may move within a predetermined range. For example, if a distance between the robot300and the door is extremely close and thus the entire door does not come within the viewing angle of the camera330, the robot300may not correctly search the door. Therefore, the robot300may capture the surroundings while moving the current position such that a whole object that can be recognized as the door comes within the viewing angle of the camera330. The robot300may identify objects in the continuously captured images and recognize the door. For example, the robot300may detect a contour from the captured images by using a mask such as Sobel, Roberts, Canny, Prewitt, etc., and determine whether the detected contour has a rectangular shape that can be generally recognized as a door or not. If necessary, the robot300may perform preprocessing on the captured image. The preprocessing can perform cropping, scaling, image rotation, noise removal, contrast control, blur removal, background region removal, distortion correction, etc., for a specific region. In the embodiment, the robot300may perform a door search for a predetermined period of time. If the door is not searched, the robot300may determine the door as not being around. In this embodiment, the robot300may determine the current position as the target position without separately determining the target position to be described later. When a door is searched from the image, the robot300may extract a feature point of the door from the image (412). The feature point may include, for example, hinges, handles, and the like. The robot300may extract the feature points of the door by using an active contour model (ACM), an active shape model (ASM), an active appearance model (AAM) or known feature point extraction techniques such as a supervised descent method (SDM). In the embodiment, the robot300may further extract depth information on the searched door and/or the surrounding wall surface of the door (413). When the camera330is implemented as a stereo camera, an array camera, a 3D vision sensor, and the like, the robot300may directly obtain the depth information from the camera330. When the camera330is implemented as a mono camera, the robot300may obtain images from a plurality of the cameras330disposed in various positions of the robot300and may determine the depth information by comparing the disparity between the obtained images. In addition, the robot300may determine the size of the door, for example, the width of the door, based on the depth information. By using the size of the door and the depth information of the door based on the pixel size in the image, the actual size of the door can be calculated. On the other hand, whileFIG.6shows that the robot300extracts the feature point first for the searched door and then extracts the depth information, this embodiment is not limited thereto. That is, in various embodiments, the feature points may be extracted after the depth information is obtained from the image, or the feature point extraction may be performed simultaneously with the depth information acquisition. 
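As a minimal, non-limiting sketch of the door search and size estimation described above, the following Python/OpenCV code finds the largest roughly rectangular contour in a captured image and converts its pixel width into a physical width from the depth, using the pinhole projection relation. The edge-detection thresholds, the minimum area, the focal length, and the stereo baseline are placeholder assumptions introduced here, not values from the embodiment.

```python
import cv2
import numpy as np

def find_door_candidate(image_bgr: np.ndarray):
    """Return the bounding box (x, y, w, h) of the largest door-like rectangle, or None."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)       # simple noise-removal preprocessing
    edges = cv2.Canny(gray, 50, 150)               # Canny mask; Sobel or Prewitt would also work
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    best = None
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        # keep large quadrilaterals that are taller than they are wide
        if len(approx) == 4 and cv2.contourArea(approx) > 10_000:
            x, y, w, h = cv2.boundingRect(approx)
            if h > w and (best is None or w * h > best[2] * best[3]):
                best = (x, y, w, h)
    return best

def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Stereo/multi-camera depth from the pinhole model: depth = f * B / d."""
    return focal_px * baseline_m / disparity_px

def door_width_m(width_px: float, depth_m: float, focal_px: float) -> float:
    """Pinhole projection: physical width = pixel width * depth / focal length."""
    return width_px * depth_m / focal_px

# Example: a door 220 px wide seen at about 2.1 m with an assumed focal length of 700 px
print(door_width_m(220.0, 2.1, 700.0))  # ~0.66 m
```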
Also, at least one of the feature point extraction and the depth information acquisition may be omitted. For example, when the opening direction of the door can be determined only by the extracted feature point (e.g., a hinge) as described below (e.g., when a hinge is, as shown inFIGS.7and8, extracted as the feature point, the door is opened to the side where the robot300is located), the extraction of the depth information may be omitted. Similarly, when it is possible to determine the opening direction of the door only by the depth information (e.g., as shown inFIG.9, when the door is identified as being deeper than the surrounding wall surfaces, it is determined that the door is opened to the opposite side to the position of the robot300), the feature point extraction may be omitted. When the robot300includes the learning processor130described with reference toFIG.1, the above-described image analysis may be performed through the learning processor130. For example, when an image captured through the camera330is input to the learning processor130, the learning processor130may recognize a door included in the input image. The learning processor130can analyze the feature point, the depth information, and the size of the recognized door. The robot300may determine the opening direction of the door based on the obtained information on the door (420). Specifically, the robot300may determine the opening direction of the door based on at least one of the feature point, the depth information, and the size which are extracted from the image. FIGS.7to10show various embodiments of the opening direction of a door. In various embodiments, a hinge is formed in the opening direction of the door. As shown inFIGS.7and8, when the door is opened to the side where the robot300is located, the hinge1may be extracted as the feature point. Conversely, when the door is, as shown inFIG.9, opened to the opposite side to the position of the robot300, or when the door is, as shown inFIG.10, a sliding door, the hinge may not be extracted as the feature point. Also, the hinge1may be formed on the opposite side to a side where the door is opened. For example, when the door is opened on the left side, the hinge1may be, as shown inFIG.7, disposed on the right side of the door, and when the door is opened on the right side, the hinge1may be, as shown inFIG.8, disposed on the left side of the door. In various embodiments, a handle may be formed on a side the door is opened. For example, when the door is opened on the left side, the handle2is, as shown inFIG.7, disposed on the left side of the door, and when the door is opened on the right side, the handle2is, as shown inFIG.8, disposed on the right side of the door. As described above, the robot300may determine the opening direction of the door based on whether or not the hinge is extracted as the feature point from the image and of the arrangement position of the handle. In various embodiments, the robot300may further determine the opening direction of the door based on the depth information. In the embodiment, as shown inFIG.9, when the door is installed deeper than the surrounding wall surfaces, the door may be opened to the opposite side to the position of the robot. Accordingly, the robot300may determine that the door is opened to the opposite side to the position of the robot when the depth of the door is greater than that of the surrounding wall surface by a threshold or more. 
The robot300may determine the target position based on the determined opening direction of the door (430). The target position may not overlap with the moving line of the door. That is, the target position may be a position in which the moving line of the recipient moving through the door from the opposite side of the door is not disturbed by the robot300or the goods unloaded from the robot300. When the robot300waits at the determined target position or the robot300unloads the goods at the determined target position, the moving line of the door and the moving line of the recipient moving to the door may not be disturbed. In the embodiment, the target position may be a position where the robot300can stably unload the goods, for example, a position where a wall surface that the robot300can support is located. FIGS.11to14show various embodiments of the target position determined by the robot. In the embodiment, when the hinge1is extracted as the feature point, the robot300may determine, as shown inFIG.11, a first position10that is in contact with the wall surface of one side of the door as the target position. Here, the robot300may select a wall surface disposed further away from the hinge1among the wall surfaces on both sides of the door as the wall surface. Generally, the door is opened in the direction in which the hinge1is viewed and recognized, and, with the hinge as an axis, is opened on the opposite side to the hinge1. Therefore, when the hinge1is viewed and recognized from the right side of the door, the robot300may determine that the door is opened toward the robot300from the left side as shown inFIG.11. In addition, the robot300may determine, as the target position, the first position10that is in contact with the left wall surface disposed further away from the hinge1among the wall surfaces on both sides of the door. Similarly, when the hinge1is viewed and recognized from the left side of the door, the robot300may determine that the door is opened toward the robot300from the right side as shown inFIG.12. In addition, the robot300may determine, as the target position, a second position20that is in contact with the right wall surface disposed further away from the hinge1among the wall surfaces on both sides of the door. When the hinge1is extracted from the image, the robot300may further extract the handle2as the feature point. Then, the robot300may select a wall surface disposed adjacent to the handle2among the wall surfaces on both sides of the door as the wall surface. Generally, the door is opened on the side where the handle2is disposed. Therefore, when the hinge is extracted from the image and the handle is extracted from the left side of the door, the robot300may determine that the door is opened from the left side of the door to the side where the robot300is located. In addition, the robot300may determine the first position10that is in contact with the wall surface on the left side of the door as the target position, as shown inFIG.11. Similarly, when the hinge is extracted from the image and the handle2is extracted from the right side of the door, the robot300may determine that the door is opened toward the robot300from the right side of the door. In addition, the robot300may determine the second position20that is in contact with the wall surface on the right side of the door as the target position, as shown inFIG.12. 
In the embodiment, when the hinge1is extracted as the feature point, the robot300may determine, as the target position, a third position30spaced apart from the door by a distance corresponding to the width of the door, as shown inFIGS.11and12. Such an embodiment may be applied, for example, when a sufficient space for the robot300to enter the first position10or the second position20is not formed. However, the present embodiment is not limited thereto, and the third position30may be determined as the target position with a higher priority than that of the first position10or the second position20of the door. In the above embodiments, the recipient can safely open the door without being disturbed by the robot300located at the target positions10,20, and30or by the goods unloaded at the target positions10,20, and30. In addition, the recipient can recognize the robot300or the unloaded goods disposed in the first position10or the second position20as soon as the recipient opens the door. In the embodiment, when the hinge1is not extracted as the feature point, the robot300may determine a fourth position40that is in contact with the door as a target position, as shown inFIGS.13and14. When the hinge1is not extracted as the feature point, the door may be, as shown inFIG.13, opened to the opposite side to the position of the robot300or may be, as shown inFIG.14, a sliding door. In this embodiment, even if the robot300or the goods unloaded from the robot300is located in the fourth position40, this does not interfere with the opening of the door by the recipient. Rather, when the robot300is arranged in the fourth position40or the goods unloaded from the robot300are arranged in the fourth position40, the recipient who has opened the door can more easily access the robot300or the goods. Meanwhile, as shown inFIGS.11and12, when the target position is determined as the first position10or the second position20, the target position may contact the wall surface. Such a target position enables the robot300to unload the goods more stably by supporting the wall surface when the goods are unloaded by the robot300. The method in which the robot300unloads the goods by using the wall surface will be described below in more detail with reference toFIGS.21to24. When the target position is determined, the robot300may perform predetermined operations with respect to the target position.FIGS.15to20show various operations of the robot for the target position. For example, the robot300may wait at the target position as shown inFIGS.15to17or may, as shown inFIGS.18and19, move within the target position or around the target position so as to unload the goods at the target position (440). In the embodiment, when the robot300is in the current target position or can unload the goods from the current position to the target position, the robot300may, as shown inFIG.20, rotate without moving, if necessary. In the embodiment, if the goods have to be delivered directly to the recipient, the robot300may wait at the target position (450). The robot300may identify/authenticate the recipient by using identification information and/or authentication information which are stored in advance or are received through the communication circuit310, and may wait at the target position until the recipient receives the goods from the container302. In various embodiments, the robot300may transmit data for notifying that the robot is waiting at the delivery destination to the server, the terminal of the manager, the terminal of the recipient, etc. 
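The position selection described above (the first to fourth positions) can be summarized as a small decision routine. The sketch below is only an illustration under stated assumptions: the return labels, the side convention (as seen from the robot), and the fallback priority are introduced here and do not bind the embodiment.

```python
def select_target_position(hinge_side, handle_side, door_width_m, side_wall_reachable=True):
    """Pick a waiting/unloading spot that stays out of the door's swing path.
    hinge_side / handle_side: "left", "right", or None when not identified in the image."""
    if hinge_side is None:
        # Door opens away from the robot or is a sliding door: standing in front of it is safe.
        return "fourth_position_in_front_of_door"
    # The door swings toward the robot; prefer the wall farther from the hinge,
    # which is the wall next to the handle when the handle was also extracted.
    open_side = handle_side if handle_side else ("left" if hinge_side == "right" else "right")
    if side_wall_reachable:
        return f"first_or_second_position_against_{open_side}_wall"
    # Otherwise stand back from the door by at least its own width.
    return f"third_position_{door_width_m:.2f}m_in_front_of_door"
```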
In various embodiments, when the goods are not received for a predetermined period of time, the robot300may return to a place designated by the manager (for example, a distribution center or a delivery vehicle). When the goods do not need to be delivered directly to the recipient, the robot300may unload the goods at the target position (460). In the embodiment, when the robot300unloads the goods by using an elevating device or a robot arm, the robot300may be damaged or overturned by rebound or vibration. In addition, when the goods are discharged to the outside of the container302for being unloaded, the center of gravity of the robot moves toward the outside of the robot300by the weight of the goods, so that the robot300may be overturned. In order to prevent this problem, in various embodiments, the target position may be determined as a position to which the wall surface is adjacent, such as the first position10or the second position20of the door. The robot300may unload the goods while supporting the wall surface at the target position. FIGS.21and22show a method in which the robot unloads goods in accordance with the embodiment.FIGS.23and24show a method in which the robot unloads goods in accordance with another embodiment. The robot300supports, as shown inFIG.21, the goods discharged from the container302on the wall surface W located in front of the robot. Whether or not the goods are sufficiently supported on the wall surface W may be determined through an image captured by the camera330or by the sensor140capable of measuring an impact force or a reflection force between the goods and the wall surface W. When the goods are supported on the wall surface W, the robot300can be prevented from overturning in the direction of the goods even if the center of gravity moves in the direction of the goods. Then, as shown inFIG.22, the robot300may put the goods on the floor while supporting the goods on the wall surface W by controlling the sliding module through the manipulator. In another embodiment, the rear of the robot300may be, as shown inFIG.23, supported on the wall surface W. To this end, the robot300can rotate at the target position. With the rear supported on the wall surface W, the robot300may discharge the goods. Then, the robot300may, as shown inFIG.24, put the discharged goods on the floor. In the above embodiments, the rebound or vibration applied to the robot300when the goods are unloaded can be absorbed through the wall surface W, so that it is possible to prevent that the robot300is tilted by the movement of the center of gravity. The above-described operations may be performed by the processor360of the robot300. However, in the embodiment, when the robot300operates in a cloud environment, at least some of the above-described operations may be performed by a server (e.g., the AI server200) that communicates with the robot300through the cloud network. For example, when the image captured by the camera330is transmitted to the server through the communication circuit310, the server may search the door, extract feature points, and obtain depth information through image analysis. Then, the server may determine the target position of the robot300based on the information on the door. The server may transmit a move command including, for example, coordinates and/or vector data as information on the determined target position to the robot300, and the robot300moves to the target position in accordance with the move command received from the server. 
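When the robot operates in the cloud environment described above, the move command carrying coordinates and/or vector data could take a form as simple as the following. The message fields and the robot method move_to are hypothetical names used only to illustrate the exchange; they are not part of the embodiment.

```python
# Hypothetical move command the server might send after analyzing the camera image
move_command = {
    "type": "move",
    "target": {"x": 1.20, "y": 2.90},    # coordinates of the determined target position
    "heading": {"dx": 0.0, "dy": 1.0},   # vector data: direction to face on arrival
}

def apply_move_command(robot, command: dict) -> None:
    """Move the robot in accordance with a move command received from the server (illustrative)."""
    if command["type"] == "move":
        robot.move_to(command["target"]["x"], command["target"]["y"])  # hypothetical robot method
```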
In addition, the server may transmit a command to wait or a command to unload the goods to the robot300, and the robot300may wait at the target position or unload the goods in accordance with the received command. FIG.25is a flowchart showing a method for determining the opening direction of the door in accordance with the embodiment. The method shown inFIG.25is an example of the method for determining the door opening direction described with reference toFIG.6. In the embodiment shown inFIG.25, the robot300determines the opening direction of the door by first considering the depth information and the feature points extracted from the image. In the description of the embodiment ofFIG.25, detailed descriptions of operations the same as or similar to those described with reference toFIG.6will be omitted. Referring toFIG.25, the robot300may extract the depth information from an image (501). The robot300may determine whether the depth of the door is greater than the depth of the surrounding wall surface by comparing the depth information of the door and the depth information of the surrounding wall surface of the door (502). As described with reference toFIG.9, when the door is deeper than the surrounding wall surface, the door is generally opened to the opposite side to the position of the robot300. Therefore, based on the depth information, if it is determined that the depth of the door is greater than the depth of the surrounding wall surface by a threshold or more, the robot300may determine that the door is opened to the opposite side to the position of the robot300(503). When the depth between the door and the surrounding wall surface is not greater than the threshold, that is, when the depth of the door is substantially the same as the depth of the surrounding wall surface, the robot300may extract feature points from the image. In the embodiment, the robot300may first extract the hinge as the feature point (504). As described with reference toFIGS.9and10, when the door is not opened to the side where the robot300is located, that is, when the door is opened to the opposite side to the position of the robot300, or when the door is configured with a sliding door, the hinge may not be viewed and recognized in the current position of the robot300. Accordingly, when the hinge is not extracted as the feature point in the image (505), the robot300may determine that the door is opened to the opposite side to the position of the robot300(503). When the hinge is extracted from the image (505), the robot300may determine that the door is opened to the side where the robot300is located (506). In this embodiment, the robot300may further extract the handle as the feature point (507). As described with reference toFIGS.7and8, the door may be opened on one side where the handle is disposed. When the handle is extracted from the left side of the door (508), the robot300may determine that the door is opened to the side where the robot300is located, and is opened on the left side of the door (509). Similarly, when the handle is extracted from the right side of the door (509), the robot300may determine that the door is opened to the side where the robot300is located, and is opened on the right side of the door (510). InFIG.25, the robot300considers the depth information first rather than the feature points and determines the opening direction of the door. However, the technical spirit of the present embodiment is not limited thereto. 
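The order of checks in FIG.25 (depth comparison first, then the hinge, then the handle) can be captured in a short routine. In this sketch the threshold value and the returned labels are assumptions introduced for illustration only.

```python
def door_opening_direction(door_depth_m, wall_depth_m, hinge_found, handle_side,
                           depth_threshold_m=0.05):
    """Follow the order of FIG. 25: compare depths first, then check the hinge, then the handle."""
    # A door set back from the surrounding wall opens away from the robot.
    if door_depth_m - wall_depth_m >= depth_threshold_m:
        return "opens_to_opposite_side"
    # No visible hinge: the door opens away from the robot or is a sliding door.
    if not hinge_found:
        return "opens_to_opposite_side_or_sliding"
    # A visible hinge: the door swings toward the robot, on the side where the handle is.
    if handle_side == "left":
        return "opens_toward_robot_on_left"
    if handle_side == "right":
        return "opens_toward_robot_on_right"
    return "opens_toward_robot"
```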
That is, in other various embodiments, the robot300may first determine the opening direction of the door based on the feature point, for example, a hinge, and additionally or complementarily use the depth information to determine the opening direction of the door. Alternatively, the robot300may determine the opening direction of the door by using only the feature points. It can be understood by those skilled in the art that the embodiments can be embodied in other specific forms without departing from their spirit or essential characteristics. Therefore, the foregoing embodiments and advantages are merely exemplary and are not to be construed as limiting the present invention. The scope of the embodiments is defined by the following claims rather than by the foregoing description. All modifications, alternatives, and variations derived from the meaning and scope of the claims and their equivalents should be construed as being included in the scope of the embodiments. Various embodiments of the present disclosure provide a robot capable of unloading goods or waiting at an appropriate position where the moving line of a recipient is not disturbed, and provide a method for controlling the robot. Various embodiments of the present disclosure provide the robot which judges door information from images and determines a moving position based on the judged information, and provide a method for controlling the robot. Various embodiments of the present disclosure provide the robot which waits for a recipient or unloads goods at the determined moving position, and provide a method for controlling the robot. Various embodiments of the present disclosure provide the robot which unloads goods by using a wall surface identified from the image as a support, and provide a method for controlling the robot. One embodiment is a robot including: at least one motor provided in the robot; a camera configured to capture an image of a door; and a processor configured to determine, based on at least one of depth information and a feature point identified from the image, a target position not overlapping with a moving area of the door, and control the at least one motor such that predetermined operations are performed with respect to the target position. The feature point may include at least one of a handle and a hinge of the door. When the hinge is extracted from the image, the processor may determine a position that is in contact with a wall surface disposed further away from the hinge among wall surfaces on both sides of the door as the target position. When the hinge is not identified from the image, the processor may determine a position that is in contact with the door as the target position. When the hinge is identified from the image, the processor may determine a width of the door and determine a position spaced apart from the door by the width as the target position. When the hinge and the handle are extracted from the image, the processor may determine a position that is in contact with a wall surface disposed adjacent to the handle among wall surfaces on both sides of the door as the target position. When a depth of the door is greater than depths of surrounding wall surfaces of the door by a threshold or more, the processor may determine a position that is in contact with the door as the target position. The processor may control the at least one motor such that the robot moves or rotates with respect to the target position.
The processor may control the at least one motor such that the robot unloads goods at the target position. As the at least one motor is controlled, the robot may discharge the goods to the outside of a body of the robot and unload the discharged goods while supporting the goods on a wall surface that is in contact with the target position. As the at least one motor is controlled, the robot may support one side of a body of the robot on the wall surface that is in contact with the target position, discharge the goods to the other side that faces the one side, and unload the discharged goods. Another embodiment is a method for controlling the robot according to the embodiment of the present invention, and the method includes: capturing an image of a door; determining, based on at least one of depth information and a feature point extracted from the image, a target position not overlapping with a moving area of the door; and controlling at least one motor which is provided in the robot such that predetermined operations are performed with respect to the target position. The feature point may include at least one of a handle and a hinge of the door. The determining the target position may include: extracting the hinge from the image; and determining a position that is in contact with a wall surface disposed further away from the hinge among wall surfaces on both sides of the door as the target position. The method may include determining a position that is in contact with the door as the target position when the hinge is not extracted from the image. The determining the target position may include: extracting the hinge from the image; determining a width of the door; and determining a position spaced apart from the door by the width as the target position. The determining the target position may include: extracting the hinge and the handle from the image; and determining a position that is in contact with a wall surface disposed adjacent to the handle among wall surfaces on both sides of the door as the target position. The determining the target position may include determining a position that is in contact with the door as the target position when a depth of the door is greater than depths of surrounding wall surfaces of the door by a threshold or more. The controlling the at least one motor may include controlling the at least one motor such that the robot moves or rotates with respect to the target position. The controlling the at least one motor may include controlling the at least one motor such that the robot unloads goods at the target position. In certain implementations, a robot comprises: at least one motor; a camera configured to capture an image of a door; and a processor configured to: determine, based on a feature point identified from the image, a target position not overlapping a moving area of the door, and control the at least one motor such that at least one operation is performed with respect to the target position, wherein the feature point includes at least one of a handle or a hinge of the door. When the hinge of the door is identified in the image, a first wall surface is positioned at a first side of the door that is further from the hinge, and a second wall surface is positioned at a second side of the door that is closer to the hinge, and the processor selects, as the target position, a position in front of the first wall surface. When the hinge of the door is not identified in the image, the processor selects a position that is in front of the door as the target position.
When the hinge is identified in the image, the processor determines a width of the door and selects a position that is spaced apart from the door by at least the width as the target position. When the hinge and the handle of the door are identified in the image, a first wall surface is positioned at a first side of the door that is further from the hinge and closer to the handle, and a second wall surface is positioned at a second side of the door that is closer to the hinge and further from the handle, and the processor selects, as the target position, a position that is in front of the first wall surface. The processor is further configured to determine the target position based on depth information associated with a depth of a door, and when the depth of the door is greater than depths of one or more surrounding wall surfaces by at least a threshold distance, the processor selects a position that is in front of the door as the target position. The processor controls the at least one motor such that the robot at least one of moves or rotates with respect to the target position. The processor controls the at least one motor such that the robot unloads a good at the target position. When the robot is unloading the good, the processor may manage the at least one motor such that the robot discharges the good to an outside of a body of the robot and supports the good on a wall surface adjacent to the target position. Alternatively, when the robot is unloading the good, the processor is further configured to manage the at least one motor such that a first side of a body of the robot is positioned against a wall surface that is adjacent to the target position, and the robot discharges the good to a second side that is opposite to the first side. In another example, a robot comprises: at least one motor to move the robot; a camera configured to capture an image of a door; and a processor configured to: determine a presence or an absence of a hinge of the door in the image, and control the at least one motor to move the robot to avoid a swing path of the door and to perform at least one operation away from the swing path of the door based on the presence or the absence of the hinge in the image. When the presence of the hinge is determined in the image, and the door is positioned between a first wall surface and a second wall surface that is closer to the hinge than the first wall surface, the processor controls the at least one motor to position the robot in front of the first wall surface. When the absence of the hinge is determined in the image, the processor controls the at least one motor to position the robot in front of the door. When the presence of the hinge is determined in the image, the processor determines a width of the door and controls the at least one motor to position the robot to be spaced apart from the door by at least the width of the door. When the presence of the hinge and a presence of a handle of the door are determined in the image, and the door is positioned between a first wall surface and a second wall surface that is farther from the handle than the first wall surface, the processor controls the at least one motor to position the robot in front of the first wall surface.
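The hinge-, handle-, and depth-based rules described above amount to a small decision procedure for choosing a target position outside the moving area of the door. The following is a minimal Python sketch of that logic; the DoorObservation fields, the side labels, and the depth threshold value are illustrative assumptions rather than the disclosed implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative threshold for the door-recessed-in-wall case (assumption).
DEPTH_THRESHOLD_M = 0.15

@dataclass
class DoorObservation:
    hinge_side: Optional[str]    # "left" / "right" if a hinge is identified, else None
    handle_side: Optional[str]   # "left" / "right" if a handle is identified, else None
    door_width_m: Optional[float]
    door_depth_m: Optional[float]
    wall_depth_m: Optional[float]

def select_target_position(obs: DoorObservation) -> str:
    """Return a label for a target position outside the door's moving area."""
    # Recessed door: deeper than the surrounding walls by at least the threshold,
    # so the swing stays inside the recess and the robot may wait in front of the door.
    if (obs.door_depth_m is not None and obs.wall_depth_m is not None
            and obs.door_depth_m - obs.wall_depth_m >= DEPTH_THRESHOLD_M):
        return "in_front_of_door"
    # No hinge identified: fall back to a position in front of the door.
    if obs.hinge_side is None:
        return "in_front_of_door"
    # Hinge and handle both identified: wall surface adjacent to the handle.
    if obs.handle_side is not None:
        return f"in_front_of_{obs.handle_side}_wall"
    # Hinge only: wall surface on the side farther from the hinge.
    far_side = "left" if obs.hinge_side == "right" else "right"
    return f"in_front_of_{far_side}_wall"

def standoff_distance(obs: DoorObservation, default_m: float = 1.0) -> float:
    # The width-based rule keeps the robot spaced apart from the door by the door width.
    return obs.door_width_m if obs.door_width_m is not None else default_m
```

In practice the feature points and depths would come from the captured image and the associated depth information, and the selected label would be mapped to coordinates used for controlling the at least one motor.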
The processor may further: determine depth information associated with a depth of a door relative to a depth of at least one surrounding wall surface, and control the at least one motor to position the robot in front of the door when the depth of the door is greater than the depth of the at least one surrounding wall surface by at least a threshold distance. The processor may control the at least one motor such that the robot at least one of moves or rotates to avoid the swing path of the door. The processor controls the at least one motor such that the robot unloads a good at a position out of the swing path of the door. When the robot is unloading the good, the processor may control the at least one motor such that the robot discharges the good to an outside of a body of the robot and supports the good on a wall surface adjacent to the position out of the swing path of the door. When the robot is unloading the good, the processor may control the at least one motor such that a first side of a body of the robot is positioned against a wall surface that is adjacent to the position out of the swing path of the door, and the robot discharges the good to a second side that is opposite to the first side. Through the robot according to various embodiments and the method for controlling the robot, it is possible to complete the delivery of goods safely even when a recipient cannot receive the goods directly. Through the robot according to various embodiments and the method for controlling the robot, the recipient is able to receive the goods delivered by the robot without the moving line of the recipient being disturbed. It will be understood that when an element or layer is referred to as being “on” another element or layer, the element or layer can be directly on another element or layer or intervening elements or layers. In contrast, when an element is referred to as being “directly on” another element or layer, there are no intervening elements or layers present. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. It will be understood that, although the terms first, second, third, etc., may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer or section from another region, layer or section. Thus, a first element, component, region, layer or section could be termed a second element, component, region, layer or section without departing from the teachings of the present invention. Spatially relative terms, such as “lower”, “upper” and the like, may be used herein for ease of description to describe the relationship of one element or feature to another element(s) or feature(s) as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation, in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as “lower” relative to other elements or features would then be oriented “upper” relative to the other elements or features. Thus, the exemplary term “lower” can encompass both an orientation of above and below.
The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Embodiments of the disclosure are described herein with reference to cross-section illustrations that are schematic illustrations of idealized embodiments (and intermediate structures) of the disclosure. As such, variations from the shapes of the illustrations as a result, for example, of manufacturing techniques and/or tolerances, are to be expected. Thus, embodiments of the disclosure should not be construed as limited to the particular shapes of regions illustrated herein but are to include deviations in shapes that result, for example, from manufacturing. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein. Any reference in this specification to “one embodiment,” “an embodiment,” “example embodiment,” etc., means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with any embodiment, it is submitted that it is within the purview of one skilled in the art to effect such feature, structure, or characteristic in connection with other ones of the embodiments. Although embodiments have been described with reference to a number of illustrative embodiments thereof, it should be understood that numerous other modifications and embodiments can be devised by those skilled in the art that will fall within the spirit and scope of the principles of this disclosure. More particularly, various variations and modifications are possible in the component parts and/or arrangements of the subject combination arrangement within the scope of the disclosure, the drawings and the appended claims. In addition to variations and modifications in the component parts and/or arrangements, alternative uses will also be apparent to those skilled in the art. | 79,839 |
11858149 | DETAILED DESCRIPTION A robot may be a machine that automatically handles a given task by its own ability, or that operates autonomously. A robot having a function of recognizing an environment and performing an operation according to its own judgment may be referred to as an intelligent robot. The robot may be classified into industrial, medical, household, and military robot, according to the purpose or field of use. The robot may include a driver including an actuator or a motor in order to perform various physical operations, such as moving joints of the robot. A movable robot may be equipped with a wheel, a brake, a propeller, and the like to drive on the ground or fly in the air. The robot may be provided with legs or feet to walk two-legged or four-legged on the ground. Autonomous driving refers to a technology in which driving is performed autonomously, and an autonomous vehicle refers to a vehicle capable of driving without manipulation of a user or with minimal manipulation of a user. For example, autonomous driving may include all of a technology for keeping a driving lane, a technology for automatically controlling a speed such as adaptive cruise control, a technology for automatically driving a vehicle along a determined path, a technology for, if a destination is set, automatically setting a path and driving a vehicle along the path, and the like. A vehicle may include a vehicle having only an internal combustion engine, a hybrid vehicle having both an internal combustion engine and an electric motor, and an electric vehicle having only an electric motor, and may include not only an automobile but also a train, a motorcycle, and the like. The autonomous vehicle may be considered as a robot with an autonomous driving function. FIG.1is a diagram illustrating a robot system according to one embodiment of the present disclosure. The robot system may include one or more robots110and a control server120, and may further include a terminal130. The one or more robots110, the control server120, and the terminal130may be connected to each other via a network140. The one or more robots110, the control server120, and the terminal130may communicate with each other via a base station, but may also communicate with each other directly without the base station. The one or more robots110may perform a task in a space (or an area), and provide information or data related to the task to the control server120. A workspace of the robot may be indoors or outdoors. The robot may operate in a space predefined by a wall, a column, and/or the like. The workspace of the robot may be defined in various ways according to the design purpose, working attributes of the robot, mobility of the robot, and other factors. The robot may operate in an open space that is not predefined. The robot may also sense a surrounding environment and determine the workspace by itself. The one or more robots110may provide their state information or data to the control server120. The state information of the robots110may include, for example, information on the robots110, such as a position, a battery level, durability of parts, replacement cycles of consumables, and the like. The control server120may perform various analysis based on information or data provided by the one or more robots110, and control an overall operation of a robot system based on the analysis result. In one aspect, the control server120may directly control the driving of the robots110based on the analysis result. 
In another aspect, the control server120may derive and output useful information or data from the analysis result. In still another aspect, the control server120may adjust parameters in the robot system using the derived information or data. The control server120may be implemented as a single server, but may be implemented as a set of a plurality of servers, a cloud server, and/or a combination thereof. The terminal130may share the role of the control server120. In one aspect, the terminal130may obtain information or data from the one or more robots110and provide the obtained information or data to the control server120. Alternatively, the terminal130may obtain information or data from the control server120and provide the obtained information or data to the one or more robots110. In another aspect, the terminal130may be responsible for at least part of the analysis to be performed by the control server120, and may provide the analysis result to the control server120. In still another aspect, the terminal130may receive, from the control server120, the analysis result, information, or data, and may simply output the received analysis result, information, or data. The terminal130may take the place of the control server120(and/or serve/operate in a manner similar to the control server). At least one robot of the one or more robots110may take the place of the control server120(and/or serve/operate in a manner similar to the control server). In this example, the one or more robots110may be connected to communicate with each other. The terminal130may include various electronic devices capable of communicating with the robots110and the control server120. For example, the terminal130may be implemented as a stationary terminal and a mobile terminal, such as a mobile phone, a smartphone, a laptop computer, a terminal for digital broadcast, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation system, a slate PC, a tablet PC, an ultrabook, a wearable device (for example, a smartwatch, a smart glass, and a head-mounted display (HMD)), a set-top box (STB), a digital multimedia broadcast (DMB) receiver, a radio, a laundry machine, a refrigerator, a vacuum cleaner, an air conditioner, a desktop computer, a projector, and a digital signage. The network140may refer to a network that configures a portion of a cloud computing infrastructure or exists in the cloud computing infrastructure. The network140may be, for example, a wired network such as local area networks (LANs), wide area networks (WANs), metropolitan area networks (MANs), or integrated service digital networks (ISDNs), or a wireless communications network such as wireless LANs, code division multi access (CDMA), Wideband CDMA (WCDMA), long term evolution (LTE), long term evolution-advanced (LTE-A), 5G (generation) communications, Bluetooth, or satellite communications, but is not limited thereto. The network140may include a connection of network elements such as a hub, a bridge, a router, a switch, and a gateway. The network140may include one or more connected networks, for example, a multi-network environment, including a public network such as an Internet and a private network such as a safe corporate private network. Access to the network140may be provided through one or more wire-based or wireless access networks. 
The network140may support various types of Machine to Machine (M2M) communications, such as Internet of things (IoT), Internet of everything (IoE), and Internet of small things (IoST), and/or 5G communication, to exchange and process information between distributed components such as objects. FIG.2is a diagram illustrating a configuration of an AI system according to one embodiment of the present disclosure. In an embodiment, a robot system may be implemented as an AI system capable of artificial intelligence and/or machine learning. Artificial intelligence refers to a field of studying artificial intelligence or a methodology for creating the same. Machine learning refers to a field of defining various problems dealt with in the artificial intelligence field and studying methodologies for solving the same. The machine learning may be defined as an algorithm for improving performance with respect to any task through repeated experience with respect to the task. An artificial neural network (ANN) is a model used in machine learning, and may refer to a model with problem-solving abilities, composed of artificial neurons (nodes) forming a network by a connection of synapses. The artificial neural network may be defined by a connection pattern between neurons on different layers, a learning process for updating model parameters, and an activation function for generating an output value. The artificial neural network may include an input layer, an output layer, and/or optionally one or more hidden layers. Each layer may include one or more neurons, and the artificial neural network may include synapses that connect the neurons to one another. In the artificial neural network, each neuron may output a function value of an activation function with respect to the input signals inputted through a synapse, weight, and bias. The model parameters refer to parameters determined through learning, and may include weights of synapse connections, biases of neurons, and/or the like. Hyperparameters may refer to parameters which are set before learning in the machine learning algorithm, and may include a learning rate, a number of repetitions, a mini batch size, an initialization function, and the like. One objective of training the artificial neural network is to determine a model parameter for significantly reducing a loss function. The loss function may be an indicator for determining an optimal model parameter in a learning process of the artificial neural network. The machine learning may be classified into supervised learning, unsupervised learning, and reinforcement learning depending on the learning method. Supervised learning may refer to a method for training the artificial neural network with training data that has been given a label. The label may refer to a target answer (or a result value) to be inferred by the artificial neural network when the training data is inputted to the artificial neural network. Unsupervised learning may refer to a method for training the artificial neural network using training data that has not been given a label. Reinforcement learning may refer to a learning method for training an agent defined within an environment to select an action or an action order for maximizing cumulative rewards in each state. Machine learning implemented as a deep neural network (DNN) including a plurality of hidden layers, among artificial neural networks, may be referred to as deep learning, and deep learning is one machine learning technique.
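As a concrete illustration of the terms above (synapse weights and neuron biases as model parameters, an activation function per neuron, and a loss function guiding learning), the following is a minimal single-hidden-layer network in Python with NumPy. The layer sizes, the ReLU activation, and the mean-squared-error loss are illustrative assumptions and are not prescribed by the disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Model parameters: synapse weights and neuron biases (determined through learning).
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # input layer -> hidden layer
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)   # hidden layer -> output layer

def relu(x):
    # Activation function generating each neuron's output value.
    return np.maximum(x, 0.0)

def forward(x):
    hidden = relu(x @ W1 + b1)
    return hidden @ W2 + b2

def mse_loss(prediction, label):
    # Loss function: the indicator minimized when determining optimal model parameters.
    return float(np.mean((prediction - label) ** 2))

# Supervised learning pairs an input with a label (target answer) and adjusts
# W1, b1, W2, b2 (e.g. by gradient descent with a chosen learning rate,
# a hyperparameter) so as to reduce the loss.
x = rng.normal(size=(1, 4))     # one training example
y = np.array([[1.0, 0.0]])      # its label
print(mse_loss(forward(x), y))
```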
The meaning of machine learning may include deep learning. Referring toFIG.2, the robot system according to one embodiment may include an AI device210and an AI server220. In an embodiment, the AI device210may be the robot110, the control server120, the terminal130ofFIG.1, or the robot300ofFIG.3. The AI server220may be the control server120ofFIG.1. The AI server220may be a device for using a trained artificial neural network or training an artificial neural network using a machine learning algorithm. The AI server220may be composed of a plurality of servers to perform distributed processing. The AI server220may be included as a configuration of the AI device210, thereby performing at least some of artificial intelligence and/or machine learning processing with the AI device210. The AI server220may include a communicator221(or a communication device), a memory222, a learning processor225, a processor226, and/or the like. The communicator221may transmit or receive data with an external device such as the AI device210. The memory222may include a model storage223. The model storage223may store a model (or an artificial neural network223a) that is being trained or was trained by the learning processor225. The learning processor225may train the artificial neural network223ausing training data. The trained model may be used while mounted in the AI server220of the artificial neural network, or may be used while mounted in the external device such as the AI device210. The trained model may be implemented as hardware, software, and/or a combination of hardware and software. When a portion or all of the trained model is implemented as software, one or more instructions constituting the trained model may be stored in the memory222. The processor226may infer a result value with respect to new input data using the trained model, and may generate a response or control command based on the inferred result value. FIG.3is a block diagram illustrating a configuration of a robot according to one embodiment of the present disclosure.FIG.4is a diagram illustrating a learning operation of a mapping robot according to one embodiment of the present disclosure.FIG.5is a diagram illustrating an operation of localization of a service robot according to one embodiment of the present disclosure. The robot may be unable to properly recognize its current position or orientation for various reasons. If the robot does not accurately recognize its current position or orientation, the robot may not provide a service desired by the user. Embodiments of the present disclosure may provide methods by which the robot may effectively estimate its position or its pose with a simple motion that rotates in place (i.e., rotates at a specific point). Embodiments of the present disclosure may provide methods by which the robot may collect training data more efficiently. In the present disclosure, the ‘position’ of the robot may represent two-dimensional coordinate information (x, y) of the robot, and the ‘pose’ of the robot may represent two-dimensional coordinate information and orientation information (x, y, θ). In the present disclosure, the robot may be classified into any one of a mapping robot and a service robot according to a given role. The ‘mapping robot’ may refer to a robot for creating a map of the space or collecting training data according to a control signal from the control server120or the terminal130, and/or an input signal from a user. 
The ‘mapping robot’ may also be referred to as a ‘mapper.’ The mapping robot may have components with higher performance than the service robot described below. The ‘service robot’ may refer to a robot for providing a specific service according to a control signal from the control server120or the terminal130, and/or an input signal from the user. The ‘service robot’ may also be referred to as a ‘localizer’ from the point of view of localization. The service robot may have components with lower performance than the mapping robot described above. The robot300according to one embodiment may include a communicator310(or a communication device), an input interface320(or an input device), a sensor330(or a sensor device(s)), a driver340, an output interface350(or an output device), a processor370, and a storage380(or a memory). The robot300may further include a learning processor360configured to perform operations related to artificial intelligence and/or machine learning. The robot ofFIG.3may represent the mapping robot or the service robot. The communicator310may transmit or receive information or data with external devices such as the control server120or the terminal130using wired or wireless communication technology. The communicator310may transmit or receive sensor information, a user input, a trained model, a control signal, and the like with the external devices. The communicator310may include a communicator for transmitting or receiving data, such as a receiver, a transmitter, or a transceiver. The communicator310may use communication technology such as global system for mobile communication (GSM), code division multi access (CDMA), CDMA2000, enhanced voice-data optimized or enhanced voice-data only (EV-DO), wideband CDMA (WCDMA), high speed downlink packet access (HSDPA), high speed uplink packet access (HSUPA), long term evolution (LTE), LTE-advanced (LTE-A), wireless LAN (WLAN), wireless-fidelity (Wi-Fi), Bluetooth™, radio frequency identification (RFID), infrared data association (IrDA), ZigBee, near field communication (NFC), visible light communication, and light-fidelity (Li-Fi). The communicator310may use a 5G communication network. The communicator310may communicate with external devices such as the control server120and the terminal130by using at least one service of enhanced mobile broadband (eMBB), ultra-reliable and low latency communication (URLLC), or massive machine-type communication (mMTC). The eMBB is a mobile broadband service, through which multimedia content, wireless data access, and the like are provided. Improved mobile services such as hotspots and broadband coverage for accommodating the rapidly growing mobile traffic may be provided via eMBB. Through a hotspot, high-volume traffic may be accommodated in an area where user mobility is low and user density is high. Through broadband coverage, a wide-range and stable wireless environment and user mobility may be guaranteed. The URLLC service defines requirements that are far more stringent than existing LTE in terms of transmission delay and reliability of data transmission or reception. Based on such services, 5G services may be provided for, for example, production process automation at industrial sites, telemedicine, telesurgery, transportation, and safety. The mMTC is a transmission delay-insensitive service that requires a relatively small amount of data transmission. The mMTC enables a much larger number of terminals (than before) to access the wireless access networks simultaneously. 
The communicator310may receive a map of a space (or an area) or a trained model from the control server120, the terminal130, and/or another robot. The communicator310may provide the received map of the space or the trained model to the processor370or the learning processor360. The map of the space or the trained model may be stored in the storage380. The input interface320may obtain various types of data. The input interface320may include at least one camera for obtaining an image signal including an image or a video image, a microphone for obtaining an audio signal, a user interface for receiving information from a user, and/or the like. The camera of the input interface320may obtain (or receive) images of a surrounding environment of the robot300. The at least one camera may obtain a plurality of successive sequential images. The images obtained by the at least one camera may be provided to the processor370or the learning processor360. When the robot300is a mapping robot, the input interface320may include a 360 degree camera. When the robot300is the service robot, the input interface320may include a general front-facing camera. The input interface320may obtain (or receive) training data for training the artificial neural network, input data to be used when obtaining the output using the trained model, and the like. The input interface320may obtain raw input data. In this example, the processor370or the learning processor360may extract an input feature by preprocessing the input data. The sensor330may obtain (or receive) at least one of internal information of the robot300, surrounding environment information, or user information by using various sensors. The sensor330may include an acceleration sensor, a magnetic sensor, a gyroscope sensor, an inertial sensor, a proximity sensor, an RGB sensor, an illumination sensor, a humidity sensor, a fingerprint recognition sensor, an ultrasonic sensor, a microphone, a Lidar sensor, a radar, or any combination thereof. The sensor data obtained by the sensor330may be used for autonomous driving of the robot300and/or for generating the map of the space. The driver340may physically drive (or move) the robot300. The driver340may include an actuator or a motor that operates according to a control signal from the processor370. The driver340may include a wheel, a brake, a propeller, and the like, which are operated by the actuator or the motor. The output interface350may generate an output related to visual, auditory, tactile and/or the like. The output interface350may include a display for outputting visual information, a speaker for outputting auditory information, a haptic module for outputting tactile information, and/or the like. The storage380(or memory) may store data supporting various functions of the robot300. The storage380may store information or data received by the communicator310, and input information, input data, training data, a trained model, a learning history, and/or the like, obtained by the input interface320. The storage380may include a RAM memory, a flash memory, a ROM memory, an EPROM memory, an EEPROM memory, registers, a hard disk, and/or the like. The storage380may store the map of the space, the trained model, and/or the like, received from the communicator310and/or the input interface320. The map of the space or the trained model may be received in advance from the control server120or the like and stored in the storage380, and may be periodically updated. 
The learning processor360may train a model composed of an artificial neural network using training data. The trained artificial neural network may be referred to as a trained model. The trained model may be used to infer a result value with respect to new input data rather than training data, and the inferred value may be used as a basis for judgment to perform an operation. In an embodiment, when the robot300is the mapping robot, the learning processor360may train the artificial neural network using, as the training data, a set (or plurality) of reference images for a specific point and a global position or a global pose of the specific point, obtained by the input interface320. In an embodiment, when the robot300is the service robot, the learning processor360may determine the position or the pose corresponding to the query image, using the at least one query image obtained by the input interface320as input data for the trained model based on the artificial neural network. The learning processor360may perform artificial intelligence and/or machine learning processing together with the learning processor225of the AI server220ofFIG.2. The learning processor360may include a memory integrated into or implemented in the robot300. Alternatively, the learning processor360may also be implemented by using the storage380, an external memory directly coupled to the robot300, or a memory held in the external device. The processor370may determine at least one executable operation of the robot300, based on information determined or generated using a data analysis algorithm or a machine learning algorithm. The processor370may control components of the robot300to perform the determined operation. The operation of the processor370may be described with reference toFIGS.4and5. A mode of operation may vary according to whether the robot300is the mapping robot or the service robot. A learning operation by the mapping robot is first described with reference toFIG.4, and an operation of localization by the service robot is described with reference toFIG.5. Learning By the Mapping Robot A space (or an area) in which the robot300operates may be partitioned into a plurality of grids. Each of the plurality of grids may have the same shape and the same area. The shape or area of each grid may be variously selected according to, for example, the area or property of the space, a size or area of the robot300, or a design purpose. As one example,FIG.4shows a space S including sixteen grids. In another embodiment, the plurality of grids may include at least one grid having different shapes and/or different areas according to constraints on the physical space or the design purpose, for example. The processor370of the mapping robot may set a mapping path for collecting training data. As shown inFIG.4, the processor370may set the mapping path P to cover all the grids in the space S. However, embodiments of the present disclosure are not limited thereto, and various mapping paths that may cover all grids may also be set. As described above, the shape or area of the grid may be variously selected, and accordingly, the shape of the mapping path may also be appropriately changed. The processor370may control the driver340to move the robot300along the mapping path. The processor370may collect training data at specific points on the mapping path. In an embodiment, the input interface320of the mapping robot may include a 360 degree camera capable of obtaining an image spanning 360 degrees (or extending across 360 degrees). 
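The grid partition and mapping path described above can be sketched as follows. The 4-by-4 grid, the serpentine visiting order, and the cell size are illustrative assumptions; the disclosure only requires that the mapping path cover all of the grids.

```python
from typing import List, Tuple

def serpentine_mapping_path(rows: int, cols: int) -> List[Tuple[int, int]]:
    """Visit every grid cell once, sweeping alternate rows in opposite directions."""
    path = []
    for r in range(rows):
        cells = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        path.extend((r, c) for c in cells)
    return path

def grid_center(cell: Tuple[int, int], cell_size_m: float = 1.0) -> Tuple[float, float]:
    """World coordinates of a cell center, assuming square cells of cell_size_m."""
    r, c = cell
    return ((c + 0.5) * cell_size_m, (r + 0.5) * cell_size_m)

# A 4x4 grid like the sixteen-grid space S of FIG. 4.
path = serpentine_mapping_path(4, 4)
waypoints = [grid_center(cell) for cell in path]
assert len(set(path)) == 16   # the path covers all grids exactly once
```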
The processor370may receive, from the input interface, a reference image spanning 360 degrees (or covering 360 degrees) at specific points on the mapping path. The specific points may be points on the mapping path that are spaced at predetermined intervals. Spacing between specific points may be variously selected according to the property of the space S and the design purpose, for example. The reference image spanning 360 degrees may be a set of a plurality of successive reference images covering a field of view at a predetermined angle. For example, as shown inFIG.4, the set of reference images may include six successive reference images, each covering a 60 degree field of view (fields of view 1, 2, 3, 4, 5, 6 inFIG.4). In an embodiment, the set of reference images may include all order and/or direction combinations of the reference images. For example, referring toFIG.4, the set of reference images may be arranged, in the counterclockwise direction, in a first order (1→2→3→4→5→6), a second order (2→3→4→5→6→1), . . . , or a sixth order (6→1→2→3→4→5). This may be applied equally to the clockwise direction. The processor370may generate reference information on the reference images and store the generated reference information in the storage380. The reference information may include at least one of information on the angle covered by each reference image, information on the order and the direction of the reference images, the number of cameras used to obtain the reference images, the number of key features in the reference image, or the complexity of the reference image derived on the basis of the number of features. The processor370may transmit the reference information to the AI server220. The reference information provided to the AI server220may be provided to the service robot and used when determining a localization method of the service robot. The processor370may associate the set of the plurality of reference images obtained at specific points on the mapping path with a global position or pose at that specific point. The set of the plurality of reference images and the corresponding global position or pose may be used as training data for training the artificial neural network. The processor370or the learning processor360may train the artificial neural network based on the training data. In another embodiment, the mapping robot may collect training data and transmit the collected training data to the AI server220. In this example, training of the artificial neural network may be performed by the AI server220. The trained model may be trained to output a position or a pose corresponding to at least one image obtained at any point. The trained model may be used for localization of the service robot. The trained model may be implemented by deep learning. The trained model may be implemented by at least partially utilizing any one of trained models for relocalization, known to those skilled in the art, such as PoseNet, PoseNet+LSTM, PoseNet+Bi-LSTM, or PoseSiamNet. According to embodiments of the present disclosure, the mapping robot may effectively collect the training data on the mapping path, instead of all the paths in the space. Thus, complexity of collecting the training data may be improved. Localization by Service Robot As described above, the processor370of the service robot may receive, from the AI server220, a trained model trained by the mapping robot or the AI server220. The received trained model may be stored in the storage380.
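A training record at a given point on the mapping path pairs the 360 degree reference set with the global position or pose of that point, and the set may be expanded into every order and direction combination as described above. Below is a minimal Python sketch of that expansion; the six labeled views, the (x, y, theta) pose tuple, and the list-based representation are illustrative assumptions.

```python
from typing import List, Sequence, Tuple

Pose = Tuple[float, float, float]   # (x, y, theta) global pose of the specific point

def cyclic_orderings(views: Sequence, include_reverse: bool = True) -> List[list]:
    """All rotational (and optionally reversed, i.e. clockwise) orderings of the views."""
    n = len(views)
    forward = [list(views[i:]) + list(views[:i]) for i in range(n)]
    if not include_reverse:
        return forward
    reversed_views = list(reversed(views))
    backward = [reversed_views[i:] + reversed_views[:i] for i in range(n)]
    return forward + backward

def make_training_records(views: Sequence, pose: Pose) -> List[Tuple[list, Pose]]:
    """Pair every ordering of the reference set with the same global pose."""
    return [(ordering, pose) for ordering in cyclic_orderings(views)]

# Six successive reference images, each covering a 60 degree field of view (1..6).
views = ["img_1", "img_2", "img_3", "img_4", "img_5", "img_6"]
records = make_training_records(views, pose=(2.5, 1.0, 0.0))
assert len(records) == 12   # six counterclockwise orders plus six clockwise orders
```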
Additionally, the processor370of the service robot may receive reference information on the reference images from the AI server220. The reference information may be generated by the mapping robot and provided to the AI server220. As shown inFIG.5, when localization is required, the processor370of the service robot may cause the driver340to rotate the robot300in place (i.e., at a same point). In an embodiment, the processor370may determine a rotation method of the robot300based on the reference information received from the AI server220. Determining the rotation method may include determining at least one of a rotation direction, a rotation speed, and/or a rotation angle. In one aspect, sets of reference images according to various orders and directions may have been used as training data. That is, the sets of reference images obtained by the mapping robot may have been trained to cover all possible rotation direction and rotation angle combinations of the service robot. In this example, the processor370of the service robot may rotate the robot300by any angle in any direction from any position. The processor370does not need to consider an initial position or an initial orientation for rotation. The processor370may freely rotate the robot300in place (or at a specific point). A standard scheme that may be commonly used by the service robot may be predetermined. In this example, the processor370may rotate the robot300according to the predetermined standard scheme (e.g., 360 degree rotation in a clockwise direction). In another aspect, the sets of reference images according to a limited order and direction may have been used as the training data. That is, the sets of reference images obtained by the mapping robot may have been trained to cover only a limited combination of the rotation direction and the rotation angle of the service robot. In this example, the processor370of the service robot may determine the rotation direction and the rotation angle of the robot300based on the reference information on the reference images. In this example, the rotation direction of the robot300may be determined according to the direction of the set of reference images used as the training data, and the rotation angle of the robot300may be determined according to the angle covered by each reference image or the like. In another aspect, the processor370may adjust the rotation angle and/or the rotation speed of the robot300according to specifications or the number of cameras provided in the input interface320. In an embodiment, the input interface320of the service robot may include a front-facing camera that is commonly used. The input interface320may include a single camera or may include a plurality of cameras, according to the specifications of the service robot. The processor370may receive, from the input interface320, a plurality of successive sequential images obtained by the input interface320during the rotation of the robot300. The number of sequential images obtained during the rotation of the robot300may be variously selected. In an embodiment, the number of sequential images may be determined according to, for example, the number of reference images in the set of reference images used as training data, the number of cameras used by the mapping robot to obtain reference images, the number or complexity of key features in the reference image. The processor370may estimate a position or a pose of the robot300based on a plurality of sequential images obtained during the rotation of the robot300. 
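One way to turn the reference information into a rotation method (direction, total rotation angle, and number of captures) is sketched below. The ReferenceInfo and RotationPlan fields are illustrative assumptions about how such information might be encoded; they are not the disclosed data format.

```python
from dataclasses import dataclass

@dataclass
class ReferenceInfo:
    angle_per_image_deg: float   # angle covered by each reference image (e.g. 60)
    images_per_set: int          # number of reference images in a set (e.g. 6)
    directions: tuple            # directions covered in training, e.g. ("ccw", "cw")

@dataclass
class RotationPlan:
    direction: str
    total_angle_deg: float
    num_captures: int

def plan_rotation(info: ReferenceInfo, preferred_direction: str = "cw") -> RotationPlan:
    # If training covered both directions, the robot may rotate either way;
    # otherwise it follows the direction used for the reference images.
    direction = preferred_direction if preferred_direction in info.directions else info.directions[0]
    # Capture roughly one image per field of view covered by a reference image.
    total_angle = info.angle_per_image_deg * info.images_per_set
    return RotationPlan(direction=direction,
                        total_angle_deg=total_angle,
                        num_captures=info.images_per_set)

plan = plan_rotation(ReferenceInfo(angle_per_image_deg=60.0, images_per_set=6,
                                   directions=("ccw", "cw")))
# -> RotationPlan(direction='cw', total_angle_deg=360.0, num_captures=6)
```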
In an embodiment, the processor370may estimate the position or the pose of the robot300by inputting the plurality of sequential images to the trained model stored in the storage380. The trained model may output the position or the pose corresponding to the plurality of sequential images. The processor370may determine the position or the pose outputted by the trained model as the current position or the pose of the robot300. In another embodiment, the trained model may be stored at the AI server220. The processor370of the service robot may transmit the plurality of sequential images obtained during the rotation of the robot300to the AI server220having the trained model. The trained model of the AI server220may output a specific position or pose in a space, corresponding to the plurality of sequential images. The processor370may obtain, from the AI server220through the communicator310, the position estimated by the trained model of the AI server220. According to embodiments of the present disclosure, the service robot may effectively estimate its position or pose with only a simple motion of rotating in place (or at a specific point). Additionally, since localization is possible through the rotation in place, dangerous driving of a robot that does not identify its position may be avoided. FIG.6is a flowchart illustrating a learning method by a robot according to one embodiment of the present disclosure. The method shown inFIG.6may be performed by a mapping robot. In step S610, the mapping robot sets a mapping path for collecting training data. The mapping path may be set to cover all grids in the space. An appropriate mapping path may be selected according to the shape or the area of the grid. In step S620, the mapping robot may obtain a set of reference images spanning 360 degrees at a specific point on the mapping path. The set of reference images may be obtained by a 360 degree camera provided at the mapping robot. The set of reference images may include a plurality of successive reference images covering a field of view of a predetermined angle. In an embodiment, the set of reference images may include all possible order and/or direction combinations of the reference images. In step S630, the mapping robot may associate the set of reference images obtained at the specific point with a global position or a global pose of that point. In step S640, the mapping robot may store the set of reference images and the global position or global pose as the training data for training an artificial neural network. In step S650, the mapping robot may train the artificial neural network based on the training data. The trained model may be trained to output a position or a pose corresponding to at least one image obtained at any point. The trained model may be used for localization of the service robot. Step S650may be performed by a server other than the mapping robot. In this example, the mapping robot may transmit the stored training data to the server. FIG.7is a flow chart illustrating a method for localizing a robot according to one embodiment of the disclosure. The method shown inFIG.7may be performed by a service robot. In step S710, the service robot may receive a trained model from a server. The trained model may be trained by a mapping robot or server. The received trained model may be stored in the service robot. In step S720, the service robot may rotate in place (or at a specific point) for localization. The service robot may rotate in any direction at any position. 
In step S730, the service robot may obtain (or receive) a plurality of sequential images during the rotation. The plurality of sequential images may be obtained by a front-facing camera provided in the service robot. The number of sequential images obtained during one rotation of the service robot may be variously selected. In step S740, the service robot may estimate at least one of its position or its pose based on inputting the plurality of sequential images obtained during the rotation into the trained model. The trained model may output a position or a pose corresponding to the plurality of sequential images. The service robot may determine the position or the pose outputted by the trained model as its current position or pose. In at least one embodiment, step S710may not be performed. In other words, the trained model may be stored in the server. In this example, the service robot may transmit the plurality of sequential images obtained during the rotation to the server having the trained model. The trained model of the server may output a specific position or pose in a space, corresponding to the plurality of sequential images. The service robot may receive, from the server, the position estimated by the trained model of the server. Example embodiments described above may be implemented in the form of computer programs executable through various components on a computer, and such computer programs may be recorded on computer-readable media. Examples of the computer-readable media may include, but are not limited to: magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM disks and DVD-ROM disks; magneto-optical media such as floptical disks; and hardware devices that are specially configured to store and execute program instructions, such as ROM, RAM, and flash memory devices. The computer programs may be those specially designed and constructed for the purposes of the present disclosure or they may be of the kind well known and available to those skilled in the computer software arts. Examples of computer programs may include both machine codes, such as produced by a compiler, and higher level codes that may be executed by the computer using an interpreter. As used in the present disclosure (and in the appended claims), the terms “a/an” and “the” may include both singular and plural references, unless the context clearly states otherwise. Also, it should be understood that any numerical range recited herein is intended to include all sub-ranges subsumed therein (unless expressly indicated otherwise) and therefore, the disclosed numeral ranges include every individual value between the minimum and maximum values of the numeral ranges. The order of individual steps in process claims according to the present disclosure does not imply that the steps must be performed in this order; rather, the steps may be performed in any suitable order, unless expressly indicated otherwise. In other words, the present disclosure is not necessarily limited to the order in which the individual steps are recited. All examples described herein or the terms indicative thereof (“for example,” etc.) used herein are merely to describe the present disclosure in greater detail. Therefore, it should be understood that the scope of the present disclosure is not limited to the exemplary embodiments described above or by the use of such terms unless limited by the appended claims.
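Putting steps S710 through S740 together, the localization flow might look like the following sketch. The rotate and capture callables, the predict_pose interface of the trained model, and the server fallback are illustrative assumptions rather than the disclosed API.

```python
from typing import Callable, List, Optional, Sequence, Tuple

Pose = Tuple[float, float, float]   # (x, y, theta)

def localize_by_rotation(
    rotate_in_place: Callable[[float], None],            # rotate by a given angle (degrees)
    capture_image: Callable[[], object],                  # front-facing camera capture
    predict_pose: Optional[Callable[[Sequence], Pose]],   # locally stored trained model
    send_to_server: Optional[Callable[[Sequence], Pose]] = None,
    num_captures: int = 6,
) -> Pose:
    """Rotate in place, collect sequential images, and estimate the robot's pose."""
    step_deg = 360.0 / num_captures
    images: List[object] = []
    for _ in range(num_captures):
        images.append(capture_image())    # S730: obtain sequential images
        rotate_in_place(step_deg)         # S720: rotate in place
    if predict_pose is not None:          # S740: estimate pose with the local model
        return predict_pose(images)
    if send_to_server is not None:        # alternative: the model is kept on the server
        return send_to_server(images)
    raise RuntimeError("no trained model available for pose estimation")

# Usage with stand-in callables (real implementations would drive the motor,
# read the camera, and run the trained artificial neural network):
pose = localize_by_rotation(
    rotate_in_place=lambda deg: None,
    capture_image=lambda: "frame",
    predict_pose=lambda imgs: (3.0, 4.0, 90.0),
)
```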
Also, it should be apparent to those skilled in the art that various modifications, combinations, and alterations may be made depending on design conditions and factors within the scope of the appended claims or equivalents thereof. An aspect of the present disclosure is to provide methods for improving the complexity of a trained model for localization and for simplifying an operation of a robot for localization. Another aspect of the present disclosure is to provide methods for enabling a robot that does not identify its position to avoid dangerous driving. Aspects of the present disclosure are not limited to those mentioned above, and other aspects and advantages not mentioned above will be understood from the following description, and become more apparent from the exemplary embodiments. Moreover, aspects and advantages of the present disclosure may be realized by the means and combinations thereof indicated in the claims. According to one embodiment of the present disclosure, a robot may be configured to rotate in place for localization, and estimate at least one of a position or a pose in a space, based on a plurality of sequential images obtained during the rotation. The position or pose of the robot may be estimated based on inputting the plurality of sequential images obtained during the rotation of the robot into a trained model based on an artificial neural network. According to one embodiment of the present disclosure, a robot may include an input interface configured to obtain an image of a surrounding environment of the robot; and at least one processor configured to rotate the robot in place for localization, and estimate at least one of a position or a pose in a space, based on a plurality of sequential images obtained by the input interface during the rotation of the robot. The position or pose of the robot may be estimated based on inputting the plurality of sequential images obtained during the rotation of the robot into a trained model based on an artificial neural network. The trained model may output a position or a pose of the robot corresponding to the plurality of sequential images obtained during the rotation of the robot. The trained model may be trained using, as training data, a set of reference images obtained by a 360 degree camera at each of specific points in the space, and a global position or a global pose of each of the specific points. The set of reference images may include a plurality of successive reference images, each covering a field of view of a predetermined angle. The set of reference images may include all possible order and direction combinations of the reference images. The trained model may be a trained model stored in the robot or a trained model stored in a server. According to another embodiment of the present disclosure, a robot may include an input interface configured to obtain a set of reference images spanning 360 degrees, of a surrounding environment of the robot, and at least one processor configured to: associate the set of reference images obtained by the input interface at a specific point in a space with a global position or a global pose of the specific point, and store, as training data, the set of reference images and the associated global position or global pose. The at least one processor may be further configured to train a trained model based on an artificial neural network, on the basis of the training data.
The trained model may be trained to output a position or a pose corresponding to at least one image obtained at the specific point, and the trained model may be provided to at least one service robot in the space. The at least one processor may be further configured to divide the space into a plurality of grids, set a mapping path to cover all of the divided grids, and move the robot along the set mapping path. The specific point may be a point on the mapping path. The set of reference images may include a plurality of successive reference images, each covering a field of view of a predetermined angle. The set of reference images may include all possible order and direction combinations of the reference images. The input interface may include a 360 degree camera. According to one embodiment of the present disclosure, a method for localizing a robot may include rotating the robot in place for localization; obtaining a plurality of sequential images during the rotation; and estimating at least one of a position or a pose of the robot in a space, based on inputting the plurality of sequential images obtained during the rotation into a trained model based on an artificial neural network. According to one embodiment of the present disclosure, a method of learning by a robot may include obtaining a set of reference images spanning 360 degrees, of a surrounding environment of the robot using a 360 degree camera at a specific point in a space; associating the obtained set of reference images with a global position or a global pose of the specific point; and storing, as training data, the set of reference images and the associated global position or global pose. The method may further include training a trained model based on an artificial neural network, on the basis of the training data. According to embodiments of the present disclosure, a mapping robot may collect training data more efficiently, thereby improving the complexity of learning. According to embodiments of the present disclosure, the robot may effectively estimate its position or pose with a simple motion of rotating in place. According to embodiments of the present disclosure, dangerous driving of the robot that does not identify its position may be avoided. Embodiments disclosed in the present disclosure will be described in detail with reference to appended drawings, where the same or similar constituent elements are given the same reference number irrespective of their drawing symbols, and repeated descriptions thereof will be omitted. As used herein, the terms “module” and “unit” used to refer to components are used interchangeably in consideration of convenience of explanation, and thus, the terms per se should not be considered as having different meanings or functions. In addition, in describing an embodiment disclosed in the present disclosure, if it is determined that a detailed description of a related art incorporated herein unnecessarily obscure the gist of the embodiment, the detailed description thereof will be omitted. Furthermore, it should be understood that the appended drawings are intended only to help understand embodiments disclosed in the present disclosure and do not limit the technical principles and scope of the present disclosure; rather, it should be understood that the appended drawings include all of the modifications, equivalents or substitutes described by the technical principles and belonging to the technical scope of the present disclosure. 
It will be understood that when an element is referred to as being “connected to,” “attached to,” or “coupled to” another element, it may be directly connected, attached, or coupled to the other element, or intervening elements may be present. In contrast, when an element is referred to as being “directly connected to,” “directly attached to,” or “directly coupled to” another element, no intervening elements are present. It will be understood that when an element or layer is referred to as being “on” another element or layer, the element or layer can be directly on another element or layer or intervening elements or layers. In contrast, when an element is referred to as being “directly on” another element or layer, there are no intervening elements or layers present. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. It will be understood that, although the terms first, second, third, etc., may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer or section from another region, layer or section. Thus, a first element, component, region, layer or section could be termed a second element, component, region, layer or section without departing from the teachings of the present invention. Spatially relative terms, such as “lower”, “upper” and the like, may be used herein for ease of description to describe the relationship of one element or feature to another element(s) or feature(s) as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation, in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as “lower” relative to other elements or features would then be oriented “upper” relative to the other elements or features. Thus, the exemplary term “lower” can encompass both an orientation of above and below. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Embodiments of the disclosure are described herein with reference to cross-section illustrations that are schematic illustrations of idealized embodiments (and intermediate structures) of the disclosure. As such, variations from the shapes of the illustrations as a result, for example, of manufacturing techniques and/or tolerances, are to be expected. 
Thus, embodiments of the disclosure should not be construed as limited to the particular shapes of regions illustrated herein but are to include deviations in shapes that result, for example, from manufacturing. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein. Any reference in this specification to “one embodiment,” “an embodiment,” “example embodiment,” etc., means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with any embodiment, it is submitted that it is within the purview of one skilled in the art to effect such feature, structure, or characteristic in connection with other ones of the embodiments. Although embodiments have been described with reference to a number of illustrative embodiments thereof, it should be understood that numerous other modifications and embodiments can be devised by those skilled in the art that will fall within the spirit and scope of the principles of this disclosure. More particularly, various variations and modifications are possible in the component parts and/or arrangements of the subject combination arrangement within the scope of the disclosure, the drawings and the appended claims. In addition to variations and modifications in the component parts and/or arrangements, alternative uses will also be apparent to those skilled in the art. | 50,906 |
11858150 | Corresponding reference numbers indicate corresponding parts throughout the drawings. DETAILED DESCRIPTION Referring toFIG.1, a nipper of the present disclosure is indicated by10. In one example, the nipper may be used for cutting fishing line. The nipper10includes a pair of jaws12A,12B and a pair of levers14A,14B connected to the jaws for moving the jaws to cut an object between the jaws. The jaws12A,12B and levers14A,14B are pivotable about a pivot connection (defining pivot axis PA) including a fastener16connecting the jaws and levers. The jaws12A,12B extend forward from the pivot connection, and the levers14A,14B extend rearward from the pivot connection. The jaws12A,12B include pivot hubs20and blades22extending from the pivot hubs. The blades22include cutting edges22A movable upward and downward and arranged for cutting in a scissors motion responsive to upward and downward pivoting of the levers14A,14B. The cutting edges22A extend forward of the respective pivot hubs20and forward of the pivot axis PA. The blades22are arranged to bypass each other and move between open and closed positions in opposite opening and closing directions generally parallel to a cutting plane CP (FIG.3) and generally perpendicular to the pivot axis PA. Desirably, the cutting edges22A are less than 1.5 inches long, and more desirably less than 1 inch long (e.g., about 0.5 inch long). The jaw pivot hubs20are on opposite sides of the cutting plane CP and have key openings20A for connecting the jaws12A,12B to the levers14A,14B. In the illustrated embodiment, the key openings20A are generally rectangular and include generally linear edge portions and arcuate edge portions. The levers14A,14B include arms30sized and shaped for reception of fingers of a user to actuate the jaws12A,12B. The arms30include proximal ends connected to lever pivot hubs40, and include distal ends opposite the proximal ends. The arms30are paddle-shaped and have a length L (e.g., desirably less than 2.5 inches, more desirably less than 2 inches) and width W (e.g., desirably less than 2.5 inches, more desirably less than 1.5 inches) greater than the arm thickness T (e.g., desirably less than 1 inch, more desirably less than 0.5 inch). The arms30include finger beds32having finger press surfaces32A that face away from each other (upward and downward) and are sized and shaped to receive a finger to permit a user to press the arms toward each other to close the jaws12A,12B. In the illustrated embodiment, the finger beds32are formed separately from and connected to substructure of the arms30. For example, the finger beds32can be made of a polymeric material that is softer than and has a greater coefficient of friction than the material of the arm substructure (e.g., aluminum or plastic). The finger press surfaces32A of the finger beds32are contoured for reception of fingers. For example, the illustrated press surfaces32A are concave (broadly, “non-convex”). The press surfaces32A extend generally perpendicular to the cutting plane CP and the opening and closing directions of the jaws12A,12B. The cutting plane CP intersects and generally bisects the press surfaces32A. A maximum distance D1between the distal ends of the arms (when the cutting edges are in the closed arrangement) is desirably less than four times the thickness T of an arm, and more desirably less than three times the thickness T (e.g., with D1measuring less than 2 inches, and more desirably less than 1.5 inches). 
The finger beds32are sized and shaped for receiving opposing fingertips of the person (e.g., thumb and forefinger). The finger beds32are sized and configured, and arranged with respect to each other (e.g., spaced from each other in the open arrangement of the cutting edges22A), to permit the person to hold the nipper10between opposing fingertips and to pinch the first and second finger beds toward each other between their opposing fingertips to move the cutting edges22A from the open arrangement to the closed arrangement. Other configurations can be used without departing from the scope of the present disclosure. The levers14A,14B include pivot hubs40connected to the arms30. The lever pivot hubs40are configured for connecting the levers14A,14B to the jaws12A,12B. The arms30extend rearward of the pivot hubs40. The lever pivot hubs40include main bodies40A and keys40B extending inward from the main bodies. In the illustrated embodiment, the keys40B comprise protrusions having a generally cylindrical shape having a cross section closely resembling the key openings20A of the jaws12A,12B. In section, the keys40B have a generally rectangular shape with generally linear edge portions and arcuate edge portions. The keys40B are sized and shaped to closely conform to the key openings20A of the jaws12A,12B for keyed engagement of the keys with the key openings. The keys40B and key openings20A can be referred to broadly as keying structure. The reception of the keys40B in the key openings20A mates the respective levers14A,14B and jaws12A,12B and causes them to pivot conjointly about the pivot axis PA. The lever pivot hubs40are offset to opposite sides of the cutting plane CP. Other configurations (e.g., other types of keying structure) can be used without departing from the scope of the present disclosure. The lever pivot hubs40have openings permitting the fastener16to pass therethrough. As shown inFIG.3, the fastener16passes through the keyed connections of the first and second jaws12A,12B and levers14A,14B. The fastener16includes a first fastener portion16A and a second fastener portion16B threaded to the first fastener portion. Threading of the fastener portions16A,16B to each other causes heads of the fastener portions to push the lever pivot hubs40toward each other and thus pushes the jaw pivot hubs20toward each other. The arrangement is such that the jaw pivot hubs20are pressed against each other and are sandwiched by the lever pivot hubs40. The keys40B of the lever pivot hubs40are shorter than the thickness of the jaw pivot hubs20such that the lever pivot hubs are spaced from each other and do not obstruct the lever pivot hubs from pressing the jaw pivot hubs against each other. As seen inFIG.3, a gap is present between the lever pivot hubs40. The levers14A,14B are biased away from each other by a spring50such that the jaws12A,12B are normally open. In the illustrated embodiment, the spring50comprises a compression spring captured between the first and second arms30of the first and second levers14A,14B. The spring50is received over a protrusion52on an inner surface of the arm30of the first lever14A and is received in an opening54in the inner surface of the arm of the second lever. The protrusion is defined by a fastener52threaded in a threaded opening56in the arm30of the first lever14A. In assembly, the compression spring50can be installed between the levers14A,14B by passing a first end of the spring through the threaded opening56and then installing the fastener52in the threaded opening. 
The finger bed32can then be installed on the arm substructure to cover the threaded opening56. The nipper10includes a retainer60for maintaining the nipper in a closed configuration in which the jaws12A,12B are closed and the levers14A,14B are near each other. In the illustrated embodiment, the retainer60comprises a pivotable latch connected to the second lever14B by a threaded fastener64. An O-ring66is captured between the lever14B and the latch60in an annular recess in the lever to provide frictional resistance to the latch pivoting between latched (retaining) and unlatched (non-retaining) positions. When the levers14A,14B are pressed toward each other to close the jaws12A,12B, the latch60can be pivoted to the latched position by overcoming the frictional resistance of the O-ring66such that a catch60A engages a recess68in a stud70extending from the first lever14A. In the latched position, the catch60A in the recess68prevents the levers14A,14B from moving away from each other and thus holds the jaws12A,12B closed. The frictional resistance of the O-ring66maintains the latch60in the latched position. When a user desires to use the nipper10again, the latch60can be pivoted against the frictional resistance of the O-ring66to the unlatched position, and the user can permit the spring50to push the levers14A,14B away from each other such that the jaws12A,12B open. The latch60includes a pivot guide60B in the form of a protrusion (e.g., stud) receivable in an arcuate track74in an inner side of the arm30of the second lever14B. A first closed end of the arcuate track74defines the position of the latch60in the latched position. A second closed end of the arcuate track74defines the deployed position of a poker80. The retainer60includes a lanyard connector82including an opening configured for connecting the nipper to a lanyard (e.g., cord, strap, and/or clip, etc.) for stowing the nipper. The retainer60includes the poker80having a pointed free end for pushing paint out of an eyelet of a fishing hook to permit fishing line to be threaded through the eyelet. The poker80is shielded on opposite sides by first and second guards81. The poker80can be selectively deployed by pivoting the retainer60about the fastener to expose the poker for use. After a user locates an object (e.g., fishing line) to be cut in the jaws12A,12B, the user can press the levers14A,14B toward each other to cause conjoint pivoting of the levers and their respective jaws to move the jaws in a cutting motion. The cut free end of the fishing line can be threaded through an eyelet of a fishing hook or lure after using the poker80to remove paint from the eyelet, if necessary. To assemble the nipper10, the keyed connections of the jaws12A,12B and levers14A,14B can be made, the fastener16can be passed through the keyed connections to maintain the keyed connections and secure the jaws and levers to each other, the spring50can be installed between the levers by passing the spring into the threaded opening56, and the spring can be retained between the levers at a desired spring preload by threading the fastener52into the threaded opening. The fastener16presses the jaws12A,12B against each other by sandwiching the jaws with the lever pivot hubs40. It will be apparent that modifications and variations are possible without departing from the scope of the invention defined in the appended claims. The dimensions and proportions described herein are by way of example without limitation.
Other dimensions and proportions can be used without departing from the scope of the present disclosure. As various changes could be made in the above constructions and methods without departing from the scope of the invention, it is intended that all matter contained in the above description and shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense. | 10,921 |
11858151 | DETAILED DESCRIPTION It will be readily understood that the components of the embodiments, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations in addition to the described example embodiments. Thus, the following more detailed description of the example embodiments, as represented in the figures, is not intended to limit the scope of the embodiments, as claimed, but is merely representative of example embodiments. Reference throughout this specification to “one embodiment” or “an embodiment” (or the like) means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” or the like in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments. One skilled in the relevant art will recognize, however, that the various embodiments can be practiced without one or more of the specific details, or with other methods, components, materials, et cetera. In other instances, well known structures, materials, or operations are not shown or described in detail to avoid obfuscation. Scissors have been a foundational tool in hair cutting for generations. More particularly, hair cutters (e.g., barbers, stylists, etc.) utilize scissors to cut off specified amounts of a customer's hair in order to achieve a desired length. Conventionally, the length of hair to be cut is generally estimated by the barber or stylist. For example, responsive to receiving a customer instruction (e.g., “I would like to cut off one inch of hair”), the hair cutter may estimate what one inch of hair corresponds to (e.g., by utilizing their fingers to partition out one inch of the customer's hair). Experienced hair cutters may have a good feel for what a particular measurement corresponds to with respect to hair length. However, it may be difficult to maintain a consistent estimate throughout the duration of the entire haircut. In some situations, a hair cutter may use a secondary tool (e.g., a ruler, etc.) to ensure that they are cutting off the specified amount of hair. However, utilization of the secondary tool may be burdensome to the hair cutter and may slow down the hair cutting process. Additionally, any additional object utilized to cut hair is another object that must be cleaned and sterilized in-between haircuts, which may be time-consuming and may have an adverse effect on customer volume. Accordingly, an embodiment provides a pair of scissors having measurement portions thereon. In an embodiment, the measurement portions may be located on an upper surface of the blade on each scissor member and extend along a length of each scissor blade. The measurement portions may contain ruler-like measurements in various metrics (e.g., inches, centimeters, etc.) that may aid a hair cutter in cutting precise lengths of hair. As discussed further herein, the type of measurement portion and implementation herein may vary.
Additionally, in certain embodiments, the scissors may contain one or more integrated comb portions that may allow a hair cutter to more easily isolate portions of a customer's hair. The illustrated example embodiments will be best understood by reference to the figures. The following description is intended only by way of example, and simply illustrates certain example embodiments. Referring now toFIG.1, a pair of scissors10having measurement portions is illustrated. The scissors10may be comprised of a first scissor member11that is pivotally connected to a second scissor member12with a joint member13(e.g., a pin, a screw, etc.). The joint member13may connect the first scissor member11to the second scissor member12at specified connection portions14(illustrated inFIG.2) on each member. In an embodiment, each scissor member11,12may contain an upper surface15, a lower surface16(not pictured) opposite the upper surface15, an inner side17, an outer side18, a cutting edge19located along a length of the inner side17, and a gripping base20having gripping means to hold/secure the scissors10. In an embodiment, the gripping base20may be composed of a variety of different types of material (e.g., rubber, etc.). In an embodiment, the gripping base20may terminate in a finger loop21that may support a user's finger (as illustrated inFIG.1). In some embodiments, the finger loop21may contain padding (not illustrated) located on the inside of the finger loop21to make gripping of the scissors more comfortable. Additionally, in certain embodiments, one or both scissor members11,12may contain an integrated comb portion22. The integrated comb portion22may contain a plurality of teeth that may enable a hair cutter to more easily isolate and hold a certain amount of hair. In situations where each scissor member11,12contains an integrated comb portion22, the integrated comb portions22may be of the same or different length. In some configurations, when the scissors are in a closed configuration the integrated comb portions22on each scissor member11,12may substantially align to resemble a singular comb. For example, the scissors10illustrated inFIG.1provide an integrated comb portion22on each scissor member11,12, wherein the integrated comb portion22is a different length on each scissor member11,12(i.e., the integrated comb portion22is shorter on scissor member11than on scissor member12). In an embodiment, one or both scissor members11,12may contain a measurement portion23. In an embodiment, the measurement portion23may abut the cutting edge19and span a length of the inner portion17thereof. More particularly, the measurement portion23may span the entire length of the cutting edge19and terminate at the gripping base20. In an embodiment, the measurement portion23may contain a plurality of measurements in a predetermined measurement metric (e.g., inches, centimeters, etc.). Referring now toFIG.2, a segmented view of individual scissor members11and12is provided. As can be seen from the figure, each of the scissor members11,12may contain each of the elements and portions as described above. Furthermore, scissor member11may be pivotally connected to scissor member12at a connection portion14by a joint member13. In an embodiment, the measurement portion23may be integrated into one or both of the scissor members11,12. Integration of the measurement portion23into the scissor members11,12may be accomplished via a conventional engraving means (e.g., etch engraving, laser engraving, etc.).
In an embodiment, the engraving process may be facilitated at an established engraving factory, center, or production facility. In an embodiment, each of the scissor members11,12may be configured to have the same predetermined measurement metric (e.g., both scissor members have measurements in inches, both scissor members have measurements in centimeters, etc.). Alternatively, each of the scissor members11,12may be configured to have different predetermined measurement metrics, as illustrated inFIG.1(e.g., the measurement portion on one scissor member is configured to be in inches whereas the other measurement portion on the other scissor member is configured to be in centimeters). In situations where the measurement portion23is not integrated into the scissor members11,12, the measurement portion23may be attachable to the scissor members11,12at a measurement receiving portion (not pictured). In an embodiment, the measurement receiving portion may be an area that substantially corresponds to an area occupied by an integrated measurement portion. The attachable measurement portions may manifest in one or more different ways and may be attachable to the scissor members11,12at the measurement receiving portion via one or more different attachment means, as further described below. In one embodiment, the attachable measurement portion may be a sticker that may be attachable directly to the upper surface15of either of the scissor members11,12. In an embodiment, the measurements may be printed directly on the sticker, may be of virtually any type, and may be printed in virtually any color. A user may buy these stickers individually (i.e., as a dedicated sticker pack) or may find these stickers in a pack along with the novel scissors as described herein. In an embodiment, the upper surface of the relevant scissor members11,12may contain one or more engraved delineations (e.g., engraved vertical lines, an outlined area, etc.). These delineations essentially outline the measurement receiving portion on the scissor members11,12for a user. More particularly, the delineations may help guide a user in placing the sticker onto the upper surface of either of the scissor members11,12by providing them with a more concrete idea of where to place the sticker. Further to the foregoing and with reference toFIG.3(A-B), a non-limiting example of an attachable measurement portion in the form of a sticker is provided. More particularly,FIG.3Aillustrates a sticker base30on which a removable sticker31is attached. In this situation, the removable sticker31contains printed measurements presented in centimeters. A single blade32of a pair of scissors is also shown. The single blade32in this situation contains an outlined delineation area33, which may provide an indication to a user of the proper place to put the removable sticker31. Accordingly, once the user removes the removable sticker31from the sticker base30, they can place it on the delineation area33of the single blade32in order to form the product illustrated atFIG.3B. In another embodiment, the attachable measurement portion may be a thin patch (not pictured) having an upper and lower surface.
The upper surface of the patch may contain the measurements (e.g., printed thereon) and the lower surface of the patch may contain a set of small hooks or fasteners (not pictured) that may interact with a corresponding set of hooks or fasteners on the measurement receiving portion to secure the attachable measurement portion to the measurement receiving portion. Such an implementation may exist on both scissor members11,12or may exist on only one scissor member (e.g., the “top” scissor member12for which the upper surface of the cutting blade is not interfered with by the “bottom” scissor member11when the scissors10are in the closed configuration, etc.). In another embodiment, the measurement receiving portion may comprise a writing surface (not pictured). In an embodiment, the writing surface may be a blackboard material, a whiteboard material, or virtually any other type of applicable writing surface. In an embodiment, users of the scissors may utilize a writing implement (e.g., a marker, chalk, etc.) to make markings (e.g., measurements, etc.) on the measurement receiving portion. The various embodiments described herein thus represent a technical improvement to conventional scissor implementations. Specifically, the measurement portions integrated on the scissors or attachable to the scissors may enable a user to quickly take measurements of hair length without the need for an additional tool (e.g., a separate ruler, etc.). Such a technical improvement may increase hair cutting speed and may also ensure a consistent cut length throughout the entire hair cutting process. As used herein, the singular “a” and “an” may be construed as including the plural “one or more” unless clearly indicated otherwise. This disclosure has been presented for purposes of illustration and description but is not intended to be exhaustive or limiting. Many modifications and variations will be apparent to those of ordinary skill in the art. The example embodiments were chosen and described in order to explain principles and practical application, and to enable others of ordinary skill in the art to understand the disclosure for various embodiments with various modifications as are suited to the particular use contemplated. Thus, although illustrative example embodiments have been described herein with reference to the accompanying figures, it is to be understood that this description is not limiting and that various other changes and modifications may be effected therein by one skilled in the art without departing from the scope or spirit of the disclosure. | 12,674 |
11858152 | DETAILED DESCRIPTION OF THE EMBODIMENTS FIGS.1-3show different views of a hair cutting appliance10comprising a mounting assembly12and a cutting unit14. The cutting unit14is mounted on the mounting assembly12, and is configured to cut hairs. In this example, the cutting unit14comprises a stationary guard blade and a reciprocating cutter blade which are configured together to cut hairs. In other examples, the cutting unit may simply comprise one or multiple blades fixed within a guard. The mounting assembly12comprises a head16for receiving the cutting unit14and a body18for interfacing with a handgrip (not shown) by which to hold the mounting assembly12. The head16is mounted to the body18by a four-bar linkage20. The four-bar linkage20comprises a coupler link22, a frame link24and two arms26extending between the coupler link22and the frame link24(best shown inFIG.3), where each arm26is one bar of the four-bar linkage20, the coupler link22is one bar of the four-bar linkage20and the frame link24is one bar of the four-bar linkage20. The coupler link22is associated with the head16, and the frame link24is associated with the body18. In this example, the coupler link22is a part of the head16and the frame link24is a part of the body18. In other examples, the coupler link may be separate from the head, but may be configured to be fixedly attached to the head, and the frame link may be separate from the body, but may be configured to be fixedly attached to the body. The coupler link22is connected to the arms26at respective joins28, and the frame link24is connected to the arms26at respective joins28to form the four-bar linkage20. Each join28therefore provides a connection between two bars of the four-bar linkage20, and permits pivoting of one bar relative to the other bar about a linkage axis30through the respective join28. The linkage axes30through all of the joins28are parallel to one another to permit movement of the bars of the four-bar linkage in a plane perpendicular to the linkage axes30. The frame link24is connected to each arm26at respective frame joins28a, and the coupler link22is connected to each arm26at respective coupler joins28b. The arms26comprise two diverging strands to form a U-shape such that the two ends of each U-shaped arm26are connected to the coupler link22at two coupler joins28b, spaced apart along an axis parallel to a linkage axis30. The centre of each arm26at the inflection point of the U-shaped arm26is connected to the frame link24at a single frame join28a. The coupler link22is therefore supported by the arms26at four coupler joins28bin total. In other examples, each arm may comprise a single strand to form an I-shape such that the coupler link is supported by the arms at only two coupler joins in total, or the arms may comprise more than two diverging strands so that the coupler link is supported by the arms at more than two coupler joins per arm. Each arm may have a different number of diverging strands to support the coupler link at, for example, 3 or 5 coupler joins. In yet further examples, the arms may comprise two diverging strands in the form of a T, V or Y shape such that each arm supports the coupler link at two coupler joins. The arms26are permitted to pivot about linkage axes30(best shown inFIGS.2and3) passing through each join28, such that pivoting movement of an arm26relative to the frame link24rotates the coupler link22about a moving virtual pivot32(best shown inFIG.3).
The moving virtual pivot is defined by the point at which a line drawn along an arm26and passing through the respective frame join28aand the respective coupler join28bmeets a line drawn along the other arm26passing through its respective frame join28aand the respective coupler join28b. Permitting rotation of the coupler link22(and therefore of the head16) about the virtual pivot32by means of this four-bar linkage20means that the point of rotation of the head16(i.e. the virtual pivot32) can be brought closer to the skin, which provides for more comfortable conformance to the object being shaved. In this example, each of the coupler joins28bconnecting the coupler link22to the arms26is a one-degree-of-freedom join28b, such that there are four one-degree-of-freedom joins28bin total, which permit the coupler link22to pivot relative to the respective arm26about the respective linkage axis30. Each one-degree-of-freedom join28bpermits rotation about at least one axis (i.e. one degree of freedom of rotation). The coupler link22is coupled to the arms26at the coupler joins28bby semi-annular bearing elements36mounted to the coupler link22comprising a semi-annular bearing surface (best shown inFIG.2) curved about the linkage axis30. There are four semi-annular bearing elements36in this example, and each arm26comprises a bearing recess38at each end of the U-shape (i.e. at each coupler join28b), with each bearing recess38being configured to receive a respective bearing element36. The bearing recesses38cooperate with the respective bearing elements36to permit pivoting movement between the coupler link22and the arm26about the respective linkage axis30. In this example, the frame joins28aare two-degree-of-freedom joins28awhich permit the arms26to further pivot relative to the frame link24about a rotation axis34(best shown inFIG.2). In this example, the rotation axis34is perpendicular to the linkage axes30. In some examples, the rotation axis may have a component in a direction perpendicular to the linkage axes, in other words the rotation axis is not parallel to the linkage axes30. The two-degree-of-freedom joins28atherefore permit relative rotation of two respective connected bars about at least two perpendicular axes (i.e. two degrees of freedom of rotation in a single join). Since the coupler link22is connected to both of the arms26, the arms26together with the coupler link22can pivot in unison about the rotation axis34. In other words, the coupler link22and the arms26can pivot about the rotation axis34without changing orientation relative to one another. The two-degree-of-freedom joins28ain this example are ball-and-socket joins28a(best shown inFIG.2). The frame link24is therefore coupled to the arms26by respective ball-and-socket joins28a, where each ball-and-socket join28acomprises a ball42and a socket44to receive the ball42. In this example, the ball42for each ball-and-socket join28ais mounted to the frame link24, where the balls42are spaced apart along the rotation axis34, and each arm26comprises a respective socket44. In some examples, the balls may be mounted on the arms, and the frame link may comprise the corresponding sockets, spaced apart along the rotation axis. In other examples, one ball may be mounted on the frame and the corresponding arm may comprise a socket, and one socket may be disposed in the frame, and the corresponding arm may comprise a ball, such that the four-bar linkage could only be assembled in one way.
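As a side note not drawn from the disclosure itself, the moving virtual pivot defined above can be written as an elementary line-intersection calculation, assuming planar coordinates in the plane perpendicular to the linkage axes. Writing \(F_1, C_1\) for the frame join and coupler join of one arm and \(F_2, C_2\) for those of the other arm, the virtual pivot \(P\) is the common point of the two lines through those join pairs:

\[
P \;=\; F_1 + t\,(C_1 - F_1) \;=\; F_2 + s\,(C_2 - F_2),
\]

where the scalars \(t\) and \(s\) follow from the two-by-two linear system

\[
\begin{pmatrix} C_{1x}-F_{1x} & -(C_{2x}-F_{2x}) \\ C_{1y}-F_{1y} & -(C_{2y}-F_{2y}) \end{pmatrix}
\begin{pmatrix} t \\ s \end{pmatrix}
=
\begin{pmatrix} F_{2x}-F_{1x} \\ F_{2y}-F_{1y} \end{pmatrix}.
\]

Because the joins move as the arms pivot, \(P\) is recomputed for every posture of the linkage, which is why it is a moving virtual pivot rather than a fixed hinge.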
Since the arms26and the coupler link22are pivoted about the rotation axis34in unison, the absolute direction of the linkage axes30also rotates about the rotation axis34. Since the ball-and-socket joins28aare spaced apart along the rotation axis34, at any one position, the ball-and-socket joins28apermit pivoting movement about only the linkage axes30and the rotation axis34. FIG.4shows a first cutaway view of the hair cutting appliance10with one of the arms26removed. It shows a first biasing mechanism50which is configured to bias the coupler link22to a first equilibrium position about the virtual pivot32. The first biasing mechanism comprises a torsion spring52mounted to the coupler link22with an extension54of the torsion spring52connected to an arm26. Therefore, when the coupler link22is pushed by a force during use to rotate away from the first equilibrium position about the virtual pivot32, the torsion spring52urges the coupler link22back to the first equilibrium position when the force pushing it away from the first equilibrium position is removed. Referring back toFIG.3, in this example, one bearing element36comprises a bearing stop40which is configured to abut against an arm26to limit pivoting movement of the arm26relative to the coupler link22about the respective linkage axis30. In this example, the bearing stop40is configured to abut against the arm26at ±10 degrees from the first equilibrium position. It will be appreciated that, in other examples, the bearing stop may be configured to abut against the arm at any suitable angle to limit pivoting movement of the arm relative to the coupler link about the respective linkage axis. Limiting pivoting movement of the arm26also limits the rotation of the coupler link22relative to the frame link24, thereby limiting rotation of the head16relative to the body18about the virtual pivot32. In some examples, there may be more than one bearing stop. In other examples, the bearing stop may be disposed on the head. FIG.5shows a second cutaway view of the hair cutting appliance10with one of the arms26removed. It shows a second biasing mechanism60which is configured to bias the arms26and the coupler link22in unison to a second equilibrium position, about the rotation axis34relative to the frame link24. The second biasing mechanism60comprises a pair of leaf springs62mounted to the frame link24on either side of a respective frame join28a. The leaf springs62are spaced apart along an axis perpendicular to the rotation axis34, on either side of the rotation axis34. An extension of each leaf spring62is received in an arm26to bias the arm26to the second equilibrium position. Since the coupler link22is connected to the arm26in which the leaf springs62are received, and the other arm26is connected to the coupler link22, the coupler link22and both arms26are biased to the second equilibrium position by the leaf springs62. In this example, the arm26comprises a pair of protrusions64, each of which receives an extension of the respective leaf spring62hooked under the protrusion64between the frame link24and the protrusion64. In some examples, the extensions of the leaf springs may be fixed to the arms. In other examples, the leaf springs may be mounted to an arm and the extensions of the leaf springs received in the frame link to bias the arm to the second equilibrium position. In still other examples, each leaf spring may be received in, or mounted to, a different arm.
The four-bar linkage20in this example comprises a body stop70configured to abut against the frame link24to limit pivoting movement of the arms26and the coupler link22in unison about the rotation axis34. In this example, the body stop70comprises the protrusions64of the arm26which are configured to abut the frame link24to limit pivoting movement. In this example, the protrusions64are configured to abut the frame link24at ±10 degrees from the second equilibrium position. It will be appreciated that the protrusions may be configured to abut the frame link at any suitable angle to limit pivoting movement about the rotation axis. In some examples, the body stop may be configured to abut the body to limit pivoting movement of the arms and the coupler link in unison about the rotation axis. In other examples, the body stop may be disposed on the body or the frame link, and may be configured to abut an arm. Although it has been described in the examples above that the frame joins28abetween the frame link24and the arms26are two-degree-of-freedom joins, and the coupler joins28bbetween the coupler link22and the arm26are one-degree-of-freedom joins, in other examples, the orientation of the arms may be inverted so that the frame joins may be one-degree-of-freedom joins, and so that there may be only two coupler joins which may be two-degree-of-freedom joins. Variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single processor or other unit may fulfil the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. A computer program may be stored or distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope. | 12,832 |
11858153 | DETAILED DESCRIPTION OF THE INVENTION FIG.1shows a schematic perspective view of a hair cutting appliance10, particularly an electrically-operated hair cutting appliance10. The hair cutting appliance10may also be referred to as hair clipper or hair trimmer. The hair cutting appliance10may comprise a housing or housing portion12having a generally elongated shape. At a first end thereof, a cutting unit14may be provided. The cutting unit14may comprise a blade set16. The blade set16may comprise a movable blade and a stationary blade that may be moved with respect to each other to cut hair. At a second end of the housing portion12, a handle or grip portion18may be provided. A user may grasp or grab the housing at the grip portion18. The hair cutting appliance10may further comprise operator controls. For instance, an on-off switch or button20may be provided. Furthermore, a length adjustment control22may be provided at the housing12of the hair cutting appliance10. The length adjustment control22may be provided in case an adjustable spacing comb26is attached to the housing portion12of the hair cutting appliance10. InFIG.1, the adjustable spacing comb26is shown in a detached or released state. When the spacing comb26is detached from the hair cutting appliance10, a minimum cutting length may be achieved. When the spacing comb26is attached to the hair cutting appliance10, hairs can be cut to a desired length. FIG.2shows a partial perspective schematic illustration of a first end of a housing portion12of a hair cutting appliance10. Furthermore, an adjustable spacing comb26is shown in an insertion orientation with respect to the housing portion12. The housing portion12and the adjustable spacing comb26are shown in an exploded state. By way of example, the spacing comb26may comprise an attachment portion28which may comprise, for instance, sliding beams34-1,34-2. The attachment portion28may engage the housing portion12. More particularly, the attachment portion28may be attached to a mounting portion30of the housing portion12. To this end, the sliding beams34-1,34-2may be inserted into respective mounting slots38-1,38-2at the mounting portion30. The attachment portion28may further comprise at least one snap-on member36which may be provided at at least one of the sliding beams34-1,34-2, for instance. The snap-on member36may secure the spacing comb26in its mounted state. As can be further seen fromFIG.2, the spacing comb26may further comprise a toothed portion32including a plurality of comb teeth. Generally, the toothed portion32may comprise a slot in which the blade set16can be arranged in the attached state. In some Figures as shown herein, exemplary coordinate systems are shown for illustrative purposes. As used herein, an X-axis is assigned to a longitudinal direction. Further, a Y-axis is assigned to a lateral direction. Accordingly, a Z-axis is assigned to a vertical (height) direction. Respective associations of the axes/directions X, Y, Z with respective features and extensions of the comb can be derived from those Figures. It should be understood that the coordinate system X, Y, Z is primarily provided for illustrative purposes and not intended to limit the scope of the disclosure. This means that the skilled person may readily convert and transform the coordinate system when being confronted with further embodiments, illustrations and deviating view orientations.
Also, a conversion of Cartesian coordinate systems into polar coordinate systems may be envisaged, particularly in the context of a circular or curved blade set. Reference is made toFIG.3andFIG.4, illustrating an exemplary embodiment of a spacing comb70in accordance with some aspects of the present disclosure. For illustrative purposes, also a blade set50is illustrated inFIG.3andFIG.4, wherein a respective housing and/or driving mechanism is omitted. As indicated further above, the blade set50comprises a guard blade52and a cutter blade54that are arranged to be moved with respect to one another to cut hair therebetween. The guard blade52may also be referred to as stationary blade. The cutter blade54may also be referred to as movable blade. The guard blade52is facing the skin of the user when a respectively equipped appliance is used to cut hair. Hence, generally, the cutter blade54is arranged between the guard blade52and a housing portion (reference numeral12inFIG.1) of the appliance. At the guard blade52, a series of teeth56having tips60is provided. At the cutter blade54, a series of teeth58having tips62is provided. In the exemplary embodiment of the blade set50ofFIG.3andFIG.4, the series of teeth56and the series of teeth58are, respectively, linearly arranged. Hence, a reciprocating movement between the guard blade52and the cutter blade54effects the cutting action between the teeth56and the teeth58. InFIG.3andFIG.4, coordinate systems X, Y, Z are illustrated. A main extension direction of the teeth56and the teeth58is parallel to the longitudinal direction (X-direction). A cutting plane jointly defined by the teeth56and the teeth58is basically parallel to a plane X-Y. The series of teeth56and/or the series of teeth58extend in a lateral direction (Y-direction). Perpendicular to the longitudinal direction (X-direction) and the lateral direction (Y-direction), a vertical direction (Z) is provided. A movement direction of the cutter blade54with respect to the guard blade52is parallel or nearly parallel to the lateral direction (Y-direction). Additional reference is made toFIG.5andFIG.6, further detailing the spacing comb70ofFIG.3andFIG.4in isolation. The comb70comprises a support frame72. Preferably, the support frame72defines a closed surrounding framework that supports comb teeth76. As can be best seen inFIG.5, a plurality of comb teeth76arranged in a series along the Y-axis is provided. The series of comb teeth76extends basically linear in the Y-direction. Further, a main extension direction of the comb teeth76is basically parallel to the longitudinal direction (X-axis). The comb teeth76of the spacing comb70are interrupted, i.e., separated into a frontal portion78and a rear portion80. Between the frontal portion78and the rear portion80, a blade slot84is provided that is arranged to accommodate the blade set50therein, refer also toFIG.3andFIG.4. The frontal portions78and the rear portions80of the comb teeth76are aligned. As discussed above, in alternative embodiments, no interrupting blade slot84between the frontal portion78and the rear portion80is provided. Rather, a non-interrupting blade recess may be formed at the bottom side of the comb teeth76. As indicated inFIG.4andFIG.5, a hair removal channel86is formed between the frontal portion78and the rear portion80of the comb teeth76. The blade slot84forms a top end of the hair removal channel86. The hair removal channel86is a laterally extending channel extending throughout the lateral extension of the comb70.
In some sense, the design of the comb70is segmented into a front part88and a rear part90, refer also toFIG.4. The front part88involves the portion of the comb70that is placed in the mounted state in the vicinity of a frontal end of the housing portion12. The rear part90involves the portion of the comb70that is rearwardly offset from the front part88. As shown inFIG.4, the guard blade52and the cutter blade54of the blade set50extend into the blade slot84between the front part88and the rear part90of the comb70. The support frame72comprises a frontal connector bar100, a rear connector plate102, a first side bar104, and a second side bar106. The rear connector plate102is opposite to the frontal connector bar100. The side bars104,106connect the frontal connector bar100and the rear connector plate102. The rear connector plate102is rearwardly offset from the frontal connector bar100. As discussed above, at least in certain embodiments, the rear connector plate102may be referred to as rear connector bar. The rear connector plate does not necessarily have to be arranged as rear end portion of the comb70. Rather, the rear connector plate102is rearwardly offset from the frontal connector bar and may thus be arranged between the frontal end and the rear end of the structure forming the comb70. The frontal portions78of the comb teeth76extend from the frontal connector bar100. A main extension direction of the frontal portion78is aligned with the vertical direction (Z-axis). The rear portions80of the comb teeth76extend from the rear connector plate102. A main extension direction of the rear portions80of the embodiment shown inFIG.5andFIG.6is aligned with the longitudinal direction (X-direction). The comb teeth76do not connect the frontal connector bar100and the rear connector plate102as the blade slot84is formed therein. The frontal connector bar100, the rear connector plate102and the side bars104,106define a closed support profile. The rear connector plate102and the frontal connector bar100are generally spaced away from one another in the longitudinal direction. In accordance with a main aspect of the present disclosure, the frontal connector bar100is non-linear. As shown inFIG.3andFIG.5, the frontal connector bar100is rearwardly curved (concavely curved). Consequently, a central portion110of the frontal connector bar100is inwardly displaced from lateral portions112,114of the frontal connector bar. This has the effect that at the rear end of the front part88of the comb70a convex shape may be defined that facilitates hair removal. Consequently, the convex shape is not only provided at the rear end of the frontal connector bar100, but also, at least partially, at the rear ends of the frontal portions78of the comb teeth76. In certain embodiments, the frontal connector bar100is rearwardly curved towards the rear connector plate102. In the embodiment illustrated inFIG.5andFIG.6, the rear connector plate102of the support frame72forms part of a bracket120. The bracket120comprises a central portion122and side walls124,126. Hence, the bracket120is U-shaped, seen in a top view. The bracket120is provided with the rear connector plate102that has a basically flat shape. The central portion122may also be referred to as central wall. The central portion122and the side walls124,126define a stiffening wall structure adjoining the rear connector plate102. The side bar104extends between the frontal connector bar100and the side wall124that is coupled to the rear connector plate102. 
The side bar106extends between the frontal connector bar100and the side wall126of the rear connector plate102. Additional reference is made toFIG.7andFIG.8.FIG.7is a frontal view of the comb70.FIG.8is a perspective frontal top view of the comb70clearly showing a curvature of the frontal connector bar100that results in a concave frontal face and a convex rear face thereof. The comb70further comprises lateral teeth involving a first outer tooth132and a second outer tooth134. The outer teeth132,134are provided at respective lateral ends of the series of (standard) comb teeth76. The outer tooth132comprises a frontal portion136and a rear portion138. The outer tooth134comprises a frontal portion140and a rear portion142. As with the comb teeth76, also the outer teeth132,134are interrupted by the blade slot84. Generally, the outer teeth132,134are, in the lateral direction, thicker than the comb teeth76. Further, particularly in the longitudinal direction, the outer teeth132,134may protrude slightly beyond the comb teeth76. As can be best seen inFIG.5andFIG.8, in contrast to conventional comb designs, at the comb70, the side bars104,106do not form an integral portion of the outer teeth132,134. Hence, also at the outer teeth132,134, the blade slot84and, consequently, the hair removal channel86, is present. Consequently, in a lateral view, the hair removal channel86is not obstructed by the side bars104,106that connect the front part88and the rear part90. Rather, as can be seen inFIG.7, the side bars104,106are laterally displaced from the outer teeth132,134at respective lateral ends of the support frame72. The frontal portions78of the comb teeth76are provided with frontal tips148. The rear portions80of the comb teeth76are provided with frontal tips150. Connecting lines of the tips148and150, respectively, are basically parallel to connecting lines of the tips60of the guard blade teeth56and tips62of the cutter blade teeth58. Basically, the same applies to tips154of the frontal portions136,140of the outer teeth132,134, and to tips156of the rear portions138,142of the outer teeth132,134. At the rear end of the frontal portions78of the comb teeth76, rear faces152are defined. Similarly, at the rear end of the rear portions138,142of the outer teeth132,134, rear faces158are defined. The rear faces152,158define a basically linear/planar boundary region for the blade slot84and the hair removal channel86. By contrast, remaining sub-portions of the frontal portions78,136,140define an inwardly curved region that also forms a boundary for the hair removal channel86. The at least slightly inwardly curved (convexly shaped as seen from the adjacent housing or housing portion12of the appliance) region is referred to herein as first region164, refer toFIG.4and toFIG.9. The basically linear or planar region is referred to herein as second region166. The second region166maintains the desired parallel offset from the teeth56,58of the blade set50. The first region164is inwardly curved and therefore provides for an improved design of the hair removal channel86. FIG.9is a simplified cross-sectional lateral view (nearly half-section) wherein the blade set50including the guard blade52and the cutter blade54is indicated by dashed lines for illustrative purposes. Further, a distance between the rear end of the front part of the comb70and the opposite frontal end of the housing or housing portion12(indicated by dashed lines) is indicated by170. Preferably, the distance170is greater than 1.5 mm, preferably greater than 2.0 mm. 
At a top end of the housing portion12, a chamfered wall168is provided that may further enlarge the hair removal channel86. As discussed further above, the second region166is arranged to cooperate with the guard blade52and the cutter blade54of the blade set50to maintain the cutting performance. In addition, the first region164is specifically adapted to facilitate hair removal to avoid the clogging of hair clippings. Further, in exemplary embodiments, a vertical extension of the second region166(planar region) is in the range of 1.0 to 6.0 mm (millimeter), preferably in the range of between 2.0 and 5.0 mm. Accordingly, the extension of the first region (curved region)164is in the range of 10.0 mm to 30.0 mm, preferably in the range of 15.0 mm to 25.0 mm. InFIG.9, there is further indicated an angle of inclination α (alpha) of the side bars104,106with respect to a top surface174jointly defined by the frontal portions78and the rear portions80of the comb teeth76. Preferably, the inclination angle α is in the range of between 30 and 80°, preferably in the range of between 40 and 60°. It will be appreciated by those skilled in the art that the side bars104,106may of course have a different shape, involving different angles, curved shapes, transitions, segments having a different inclination, etc. Also, in this way a lateral accessibility of the hair removal gap (channel)86may be achieved. InFIG.9, a front end of the comb70is indicated by176. Similarly, a rear end is indicated by178. A top end is indicated by180. The top surface174is facing the top end180. Hence, in certain embodiments, the rear connector plate102and the frontal connector bar100are generally spaced away from one another in a direction that is basically parallel to the top surface174and basically perpendicular to the lateral direction (Y-direction). Further reference is made toFIG.10andFIG.11.FIG.10is a bottom view of a hair cutting appliance10to which a spacing comb70as discussed herein before is attached.FIG.11is a bottom view of the spacing comb70in isolation. InFIG.10, lateral ends of the comb70are indicated by182,184. Hence, the side bar104is arranged at the lateral end182. The side bar106is arranged at the lateral end184. Opposite to the first region164(FIG.4andFIG.9), also the housing portion12of the appliance10is at least slightly curved, refer to reference numeral202. That is, between the curvature202of the housing portion12and the first region (curved region)164of the comb70, two funnel portions190,192are formed. Narrow ends of the funnel portions190,192merge into one another adjacent to the central region110. Wide openings of the funnel portions190,192are facing away from one another at the lateral ends182,184of the comb70. The narrow sections of the funnel portions190,192are defined by a first clearance198between the first region164and the housing portion12of the appliance. The wide openings of the funnel portions190,192are defined by a respective second clearance200at the lateral ends182,184.
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single element or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope. | 18,440 |
11858154 | DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS It can be desirable to be able to determine a position or location of a personal care device on a body of a user by using data acquired by the device itself. Hairs growing from the human body have properties which vary depending on the part of the body from which they are growing. With knowledge of particular characteristics of hairs from various body parts, it is possible to identify a body part of a user based on data relating to the hairs growing from that body part. Referring to the drawings,FIG.1illustrates, schematically, an example of a personal care device100in accordance with embodiments of the invention. The personal care device100includes a personal care element102, an imaging device104and processing apparatus106. The invention may be embodied in any personal care device100which is capable of being used to perform a personal care activity on skin of a user. Thus, the personal care device100may, for example, comprise a hair removal device, such as an epilator or an intense pulsed light (IPL) device, a hair care device, such as a shaver, clippers, or a hair trimmer, a skin health analysis device, an electric massager, a phototherapy device or a pain relief device. Other personal care devices are also envisaged. The personal care element102is a portion of the personal care device100which performs the personal care activity, and it will be appreciated that the nature of the personal care element will depend on the nature of the particular personal care device. For example, in an epilator, the personal care element102may comprise a plurality of rotating disks or tweezer elements; in a shaver or hair trimming device, the personal care element may comprise one or more blades for cutting hair; and in a skin health analysis device, the personal care element may comprise an optical pick-up arrangement for analysing the skin. The imaging device104may be selected based on the nature of the personal care device100. In general, the imaging device104may comprise any type of imaging device capable of capturing image data relating to a portion of the user's body to which the personal care activity is being performed. In some embodiments, the imaging device104may comprise a device selected from a group comprising, but not limited to: an optical imaging device, a capacitive contact sensor, and an acoustic imaging device. An optical imaging device may include an arrangement having a light source, one or more optical elements for manipulating light from the light source, and an imaging sensor, such as a charge-coupled device (CCD) sensor, for detecting light reflected from a target (i.e. a body of the user). A capacitive contact sensor, or a capacitive displacement sensor, may generate an image of the body of the user based on contact made by the skin and/or hairs with the imaging device. An acoustic imaging device may generate an image of the body of the user using non-ionising laser pulses which are delivered to the body of the user. Ultrasonic waves which are generated by tissue in the body are then detected and analysed to generate an image. It will be appreciated that other imaging modalities (optical and otherwise) may alternatively be used. In some embodiments, multiple imaging modalities or imaging devices may be used to image the body of the user. The processing apparatus106may comprise one or more processors, processing devices and/or computing devices capable of performing processing tasks. 
The processing apparatus106is, in some embodiments, connected to the personal care element102and the imaging device104, and is configured to communicate with one or more of the personal care element and the imaging device. For example, the imaging device104may be controlled by the processing apparatus106to generate an image of (or image data relating to) a portion of a user's body to be acted upon (e.g. to be treated) by the personal care element102, and that image or image data may be processed by the processing apparatus. The processing apparatus106may also control the operation of the personal care element102and, as noted below, the processing apparatus may, in some embodiments, adjust an operating parameter of the personal care element in response to processing performed on the image data. In some embodiments, as discussed below, processing of image data may be performed by processing apparatus external to the personal care device100, for example by processing apparatus located remotely, in a computing device. FIG.2is a flowchart of an example of a method200operating a personal care device, such as the personal care device100. The method200comprises, at step202, obtaining, using the personal care device100, image data relating to a portion of a body of a user. The image data may be obtained using the imaging device104, using one or more of the various imaging modalities mentioned above. It is intended that the image data relates to a relatively small portion of the user's body, for example an area of skin of approximately 5 millimetres (mm) by 5 millimetres. In this way, sufficient image data may be obtained for subsequent processing, but the image captured is unlikely to be large enough for the user to be visually identified from the image. Therefore, the privacy of a user may be maintained. At step204, the method200comprises measuring, using the image data, a parameter relating to at least one hair growing from the portion of the body. The measuring204may be performed by processing apparatus106in the personal care device100or, as noted above, by a processor located remotely in a different computing device. The measuring204may include identifying, within the image data, one or more hairs that are growing out of the skin in the portion of the user's body that has been imaged by the imaging device104. Once one or more hairs have been identified, one or more parameters of one or more of the hairs are measured. Some properties of hairs can be representative of the body part from which they are growing. Thus, by measuring a parameter of a hair, it may be possible to determine a location of the hair on the body of the user. For example, it may be possible to determine the body part (e.g. forehead, cheek, chin, neck, shoulder, arm, armpit, chest, back, upper leg, lower leg, foot) of the user's body from which the hair is growing. In addition to measuring one or more parameters of a hair itself (e.g. of the hair shaft), the measuring204may, in some embodiments, include measuring one or more parameters of a hair follicle, or of an opening in the skin from which the hair has been removed. For example, a measure of surface density of hair in the area of skin imaged by the imaging device104may be obtained by measuring the number of hair follicles in a particular area, or the cross sectional area of the openings in the skin from which hairs grow. 
A measure of the cross sectional area of the openings in the skin may be equivalent to, or approximated by, a measure of the cross sectional area of hair shafts growing from the skin at the point where they emerge from the skin. As noted above, one or more of a large number of parameters of hair may be measured in accordance with the invention. In some embodiments, the processing apparatus106may measure all possible parameters of one or more of the hairs appearing in the image. In other embodiments, the processing apparatus106may measure a single parameter, or a subset of parameters, of one or more of the hairs if possible, and may measure one or more additional parameters if required. An example of a parameter of a hair that may be measured by the processing apparatus106is the hair shaft diameter. Hair growing from different parts of the body may have different diameters. For example, a hair growing from the scalp of a human (known as a terminal hair) may have a diameter which is relatively larger than the diameter of hair growing elsewhere on the human body (known as a vellus hair). A terminal hair may have a diameter of around 0.06 mm, while a vellus hair may have a diameter of around 0.03 mm. Since vellus hairs are so small, the imaging device104needs to have a resolution great enough to resolve such small hairs. A further example of a parameter of hair that may be measured, as noted above, is the surface density, or surface coverage, of hair, which can be determined using a measure of the cross sectional area of each hair emerging from the portion of skin in the image, or the cross sectional area of each opening in the skin from which a hair emerges (known as an infundibulum). In making such a measurement, the processing apparatus106may assume that the cross section of each opening and/or hair shaft is round. The cross sectional area of the openings, or hairs, may be calculated for a unit area, for example follicle area per centimetre squared (follicle area/cm2). A further example of a parameter of hair that may be measured according to the invention is a sum of the cross sectional areas of the hairs in the captured image. As noted above, it may be assumed that the cross section of each hair shaft is round. The sum of the areas of all of the hairs may be indicative of a body part from which they are growing. A further parameter of hair that may be measured according to the invention is the orientation of the hair emerging from the skin. The orientation of each hair may be measured relative to the personal care device100. Hairs emerging from a human body have a natural orientation relative to the body, depending on the portion of the body (e.g. the body part) from which the hair is growing. Therefore, the orientation of a hair emerging from the user's skin may be indicative of the body part from which the hair is growing. In some embodiments, the personal care device100may include components or a device (e.g. an accelerometer) for determining the orientation of the personal care device relative to the body of the user and/or relative to a fixed point (e.g. on Earth). Information from the accelerometer may be used to determine the orientation of the hairs relative to the user's body and/or relative to Earth. A further example of a parameter of hair that may be measured in accordance with the invention is the uniformity of the orientation, or the angular distribution, of hairs in the image. 
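The diameter and area measurements described above can be illustrated with a short sketch. The following Python fragment is only an illustration of one possible approach, not the patented implementation: it assumes the image data has already been segmented into a binary mask in which hair pixels are non-zero, and the 3 μm-per-pixel scale, the use of scikit-image's labelling utilities and the round-shaft approximation are choices made for the example.

```python
# Illustrative sketch only, not the patented algorithm.  Assumes a binary
# hair mask (non-zero = hair pixel) has already been produced from the image
# data; the 3 um/pixel scale is an assumed example value.
import numpy as np
from skimage.measure import label, regionprops

UM_PER_PIXEL = 3.0  # assumed optical resolution (text quotes up to 3 um/pixel)

def hair_measurements(mask: np.ndarray) -> dict:
    """Measure simple hair parameters from a binary hair mask."""
    props = regionprops(label(mask))
    # Approximate each hair shaft diameter by the minor axis of its blob.
    diam_um = [p.minor_axis_length * UM_PER_PIXEL for p in props]
    # Cross-sectional area per hair, assuming a round shaft (see text).
    cross_um2 = [np.pi * (d / 2.0) ** 2 for d in diam_um]
    # Surface coverage: summed cross-sectional area per square centimetre of
    # imaged skin.
    field_cm2 = mask.size * (UM_PER_PIXEL * 1e-4) ** 2
    return {
        "hair_count": len(props),
        "mean_diameter_um": float(np.mean(diam_um)) if diam_um else 0.0,
        "total_cross_section_um2": float(np.sum(cross_um2)),
        "cross_section_per_cm2": float(np.sum(cross_um2)) / field_cm2,
    }
```

A dictionary of this kind is one convenient way to hand the measured parameters on to the position-determining step described next.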
Hairs growing from the skin at some parts of the human body tend to grow in generally the same direction (i.e. relatively uniformly). However, hairs growing from the skin at other parts of the human body tend to have a greater angular distribution and, therefore, less uniformity in their orientation. A further example of a parameter of hair that may be measured in accordance with the invention is a pigment of the hair. Hairs growing from some parts of the human body tend to have a particular pigment, or a relatively large difference from the pigment of hair growing from other parts of the human body. For example, male facial hair (e.g. beard hair) has a large pigmentation difference from hair growing elsewhere on a male human body. Similarly, hair growing from a human leg tends to have a darker pigment than hair growing from other parts of the human body. It will be appreciated that hair parameters other than those discussed above may additionally or alternatively be measured in accordance with the invention. Once a measurement of at least one parameter of at least one hair has been made, the method further comprises, at step206, determining a position on the body of the portion of the body from which the image data is obtained, based on the measured parameter and a predetermined relation between the parameter and the position on the body. The determination of the position of said portion of the body on the body may be a determination of a body part wherein said portion of the body is present, and in such an embodiment said predetermined relation may be a predetermined relation between the parameter and specific parts of the body. Said determination of the position of the portion of the body on the body may be made by comparing the measured parameter with one or more look-up tables or databases in which said predetermined relation is stored. A look-up table and/or a database may be stored in a storage medium, such as a memory unit, which may form part of the personal care device100, or may be located remotely and accessible by the processing apparatus106. In some embodiments, one or more look-up tables may comprise generic, or universal, look-up tables, which include generic data based on hair parameters of an average human being, or an average male or female human being. In some embodiments, the look-up tables may be compiled, adjusted or updated, and/or databases may be populated based on information acquired from a particular user as is discussed below. FIG.3is a flow chart of an example of a method300for operating a personal care device, such as the personal care device100. The method300includes, at step302, performing a personal care activity on the body of the user. The particular personal care activity to be performed using the personal care device100at step302depends on the nature of the personal care element102. In some embodiments, the steps202,204and206may be performed while the personal care device100is being used to perform the personal care activity (step302). At step304, the method300further comprises, responsive to determining the position on the body of the portion of the body from which the image data is obtained, causing the personal care device to perform an action. Once the position of the portion of the body on the body has been identified at step206, one or more actions may be performed by the personal care device100based on the identification.
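One simple way to realise the predetermined relation of step206is a look-up table mapping ranges of a measured parameter to specific parts of the body. The sketch below is a hedged illustration only: the body parts, diameter ranges and nearest-range matching rule are illustrative assumptions rather than values or logic taken from the disclosure; only the roughly 0.06 mm terminal-hair and 0.03 mm vellus-hair figures quoted earlier are reflected in the example numbers.

```python
# Minimal sketch of step 206: compare a measured parameter against a
# predetermined relation stored as a look-up table.  The body parts and the
# diameter ranges below are illustrative assumptions, not disclosed values.
DIAMETER_TABLE_MM = {            # (min, max) hair shaft diameter per body part
    "scalp":     (0.050, 0.090),  # terminal hair, ~0.06 mm typical
    "beard":     (0.060, 0.120),
    "upper leg": (0.040, 0.070),
    "forearm":   (0.020, 0.040),  # vellus hair, ~0.03 mm typical
}

def determine_body_part(measured_diameter_mm: float) -> str:
    """Return the body part whose stored range best matches the measurement."""
    def distance(rng):
        lo, hi = rng
        if lo <= measured_diameter_mm <= hi:
            return 0.0
        return min(abs(measured_diameter_mm - lo), abs(measured_diameter_mm - hi))
    return min(DIAMETER_TABLE_MM, key=lambda part: distance(DIAMETER_TABLE_MM[part]))

# e.g. determine_body_part(0.032) -> "forearm"
```

In practice several parameters would typically be combined, but the single-parameter table keeps the comparison step easy to follow.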
In some embodiments, an action performed by the personal care device100may include indicating to the user the position on the body of the portion of the body from which the image data is obtained, for example indicating to the user the determined body part or the nature of the determined body part. In some embodiments, the indication to the user may be made visually, using, for example, a series of light elements (e.g. LEDs) arranged in the shape of a human body, or arranged on a diagram of the human body on the personal care device100, by illuminating a particular lighting element which corresponds to the identified body part. In some embodiments, a visual indication may be made to the user by displaying on a display screen of the personal care device100or on another computing device, a textual or pictorial indication of the identified body part. In yet further embodiments, an audible indication may be provided to the user. For example, the processing apparatus106may generate a spoken indication of the identified body part which may be presented to the user via a speaker in the personal care device100or in another computing device. In some embodiments, the action performed in response to determining the position on the body, e.g. determining the body part (step304), may include adjusting an operating parameter of the personal care device100based on the determined position on the body or the determined body part. The personal care element102of the personal care device100may be caused to operate in accordance with one or more operating parameters which may depend on the nature of the personal care element, and/or on the nature of the user (e.g. male or female). However, the personal care element102may be caused to operate in accordance with one or more parameters which can be adjusted or tailored depending on the position on the body or the body part on which the personal care activity is being performed. For example, in an embodiment in which the personal care element102comprises a hair removal element, such as a photo-epilation element, an intensity and/or a wavelength of radiation to be emitted by the personal care element102may depend upon the type of hair to be treated and, therefore, on the body part from which the hair is growing. The treatment of relatively thick hair may require a greater intensity than relatively thin hair, for example. Similarly, a relatively greater intensity of light may be applied to a hair growing deeper in the skin of the body of the user than to a hair which is growing shallower in the skin. Thus, the processing apparatus106may adjust one or more parameters of the personal care element102based on the determination of the position on the body or the body part to be treated. In some embodiments, one or more optimum settings or parameters may be stored (for example in a storage unit of the personal care device100) which correspond to the various body parts. Thus, when a particular body part is identified in step206, the processing apparatus106may automatically adjust the operating parameters of the personal care element102to suit the identified body part, such that the personal care activity is performed in an optimised manner. In other embodiments, a user may manually adjust one or more parameters based on the identified body part. At step306, the method300may further comprise storing at least one of the image data, the measured parameter and the determined body part in a storage device. 
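The parameter-adjustment action of step304can likewise be sketched as a small settings table keyed by the determined body part. This is a minimal sketch under assumed names and numbers; in particular, the set_parameter interface and the intensity values are hypothetical and are not part of the disclosure.

```python
# Hedged sketch of one action of step 304: look up stored settings for the
# determined body part and apply them to the personal care element.  The
# setting names and numbers are assumptions for illustration only.
OPTIMUM_SETTINGS = {
    "beard":     {"intensity_j_cm2": 5.0},   # thicker hair: higher intensity
    "upper leg": {"intensity_j_cm2": 4.0},
    "forearm":   {"intensity_j_cm2": 2.5},   # thinner vellus hair
}

def apply_settings(body_part: str, personal_care_element) -> None:
    settings = OPTIMUM_SETTINGS.get(body_part)
    if settings is None:
        return  # unknown body part: keep current or user-selected settings
    for name, value in settings.items():
        # Assumed interface: the element exposes a generic parameter setter.
        personal_care_element.set_parameter(name, value)
```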
By storing the data acquired by the personal care device100, the data may be used at a later time, or for an alternative purpose. For example, the stored data may be used to generate a log or a record of the locations on the user's body where the personal care activity has been performed. Such information may be used to provide an indication to the user of parts of his or her body that have not yet been treated, or portions of his or her body on which the personal care activity has been performed too many times. In this way, the personal care device100may provide feedback in real time and/or subsequently to the user regarding a quality of the user's performance of the personal care activity. The method300may further comprise, at step308, transmitting at least one of the image data, the measured parameter and the determined position on the body of the portion of the body from which the image data is obtained to a computing device. In other words, at least some of the data acquired by the personal care device100may be transmitted, for example via a wired or wireless connection, to a connected or remote computing device. The transmitted data may be used for further analysis, for example by the user or by a third party. The data may, in some embodiments, be transmitted to a medical professional, particularly if the personal care activity involves monitoring or measuring a parameter of the user's skin. Additionally or alternatively, the transmitted data may be delivered to a manufacturer of the personal care device100to enable the manufacturer to determine the extent to which the personal care device is used on different parts of the body and/or to enable the manufacturer to track the usage of the device so that the user may be informed when the personal care element102is due to be replaced, for example. As will be appreciated, the personal care device100may be any type of personal care device suitable for use in performing a personal care activity on the surface (i.e. the skin) of a body of a user. One particular example embodiment of such a personal care device400will now be discussed with reference toFIG.4.FIG.4shows a personal care device400having a body portion402and a handle portion404. The personal care device400may be the same as the personal care device100discussed above. The body portion402is configured to house components and circuitry of the device, and the handle portion404provides a means by which a user can hold the device400during use. The personal care device400includes the personal care element102, the imaging device104and the processing apparatus106. In the embodiment shown inFIG.4, the personal care device400also includes a control unit406which is configured to control an operation of the personal care element102. However, it will be appreciated that, in some embodiments, the control unit406may form part of the processing apparatus106. In other embodiments, as discussed below with reference toFIG.5, the personal care device100,400may include the control unit406, but not the processing apparatus106. The personal care device400is shown, inFIG.4, to be in contact with a surface (i.e. skin)408of the body of a user and hairs410are shown to be extending from the skin.
While, in this embodiment, the personal care device400is in contact with the skin408, in other embodiments, a separator element formed, for example, on a bottom surface of the person care device, may prevent the body portion402from contacting the skin such that the personal care device is spaced apart from the skin during use. In this embodiment, the imaging device104is an optical imaging device, which includes an illumination source412, one or more optical elements414, and a sensor or detector416. The illumination source412may include one or more light emitting diodes (LEDs) for illuminating a portion of the skin408to be imaged. The one or more optical elements414may serve to focus or direct light from the illumination source412or light reflected from the skin408. The optical elements414may include one or more lenses such as a singlet lens. The sensor416is configured to receive light reflected from the skin408, and, in this embodiment, comprises a charge-coupled device (CCD) sensor. It will be appreciated that, in other embodiments, the sensor416may comprise a different type of sensor, such as a conductive contact sensor or an acoustic detection sensor. However, in general, the sensor416should be capable of detecting and imaging objects (i.e. hairs) to a desired resolution. In general, a field of view of the imaging device104may be approximately 25 mm2(e.g. 5 mm×5 mm) and, in some embodiments may be larger, for example approximately 100 mm2(e.g. 10 mm×10 mm). An image resolution of the imaging device104may, in some embodiments, be up to 3 micrometres per pixel (μm/pixel) and, in other embodiments, may be lower, for example, 2 μm/pixel. The imaging device104may, in some embodiments, have an image pixel dimension of at least 1,000 pixels×1,000 pixels (i.e. 1 megapixel (MP)), and in other embodiments, the imaging device may have an image pixel dimension of, for example, 5 megapixels or 10 megapixels. It will be understood that the parameters of the imaging device104need only be sufficient to resolve hairs emerging from skin of a user which, typically, have a minimum diameter of around 15 μm. In use, a user may hold the handle404of the personal care device400, and position the personal care device on a portion of his or her body (e.g. a thigh). The user may operate the device400to perform a personal care activity (e.g. photo-epilation), for example by pressing an “on” or “start” button (not shown). During operation the processing apparatus106or the control unit406may cause the personal care element102to perform the desired personal care activity (in this case photo-epilation) on the hairs410growing from the skin408of the user. During operation of the personal care element102, the processing apparatus106or the control unit406may cause the imaging device104to obtain image data from the portion of the skin408being treated, or the portion of skin adjacent to the portion being treated. Image data acquired by the imaging device104may be delivered to the processing apparatus106to be analysed. Specifically, a parameter of at least one hair410for which image data has been acquired may be measured by the processing apparatus106. The processing apparatus106may then consult a look-up table, which may be stored within the personal care device100,400, and a determination may then be made as to the body part from which the hairs410are growing (in this case, the thigh of the user). 
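The imaging figures quoted above can be cross-checked with simple arithmetic. The short calculation below only restates numbers from the text (a 5 mm field of view, a 1,000 by 1,000 pixel sensor and a roughly 15 μm minimum hair diameter) and is not part of the disclosed device.

```python
# Quick arithmetic on the imaging figures quoted above (values from the text).
field_mm        = 5.0    # 5 mm x 5 mm field of view (~25 mm^2)
pixels_per_side = 1000   # at least 1,000 x 1,000 pixels (1 MP)
min_hair_um     = 15.0   # typical minimum hair diameter

scale_um_per_px = field_mm * 1000.0 / pixels_per_side   # = 5.0 um/pixel
pixels_across_hair = min_hair_um / scale_um_per_px      # = 3 pixels

print(scale_um_per_px, pixels_across_hair)
# A 1 MP sensor over a 5 mm field gives 5 um/pixel, so even the thinnest
# (~15 um) hairs span about three pixels; the 2 to 3 um/pixel figures quoted
# above resolve them more finely still.
```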
The image data acquired by the imaging device104, the measured hair parameter and/or the determined body part may then be stored in a memory. Additionally, or alternatively, based on the determined body part, the processing apparatus106may optimize, or at least improve, performance of the personal care activity by adjusting one or more operating parameters of the personal care element102, such as a radiation intensity. The look-up tables may include generic data relating to parameters of hairs likely to be found on the particular body parts of a human, based on average, or documented, data. However, a more accurate optimisation of a personal care device100,400may be achieved by obtaining user-specific data relating the parameters of hairs to different parts of a particular user's body. This may be achieved, in some embodiments, by acquiring image data from the various parts of a user's body before operation of the personal care element102. For example, a user may operate the personal care device100,400in a so-called “calibration” mode or “mapping mode” in which the imaging device104is operated by the processing apparatus106, but the personal care element102is not operated. During such an operational mode, the imaging device104of the personal care device100,400may be configured to capture a series of images (or sets of image data) from various body parts of the user, as the user moves the device over his or her body. For each body part, the processing apparatus106may calculate an average value for each hair parameter that may be measured, over the series of images for that body part. The average value for each parameter for a particular body part may then be stored in a memory unit, and used as a look-up table for that particular user, the next time that device is used. In some embodiments, the personal care device100,400may indicate to the user, for example using a display device or display screen, an area or portion of his or her body to which the device100,400should be moved to obtain the next set of images. Once images have been obtained from a sufficient number of body parts (which may be determined by the device or by the user), the user may put the device100,400into a so-called ‘treatment mode’, such that the imaging device104no longer acquires image data, but the processing apparatus106operates the personal care element102. While, in some embodiments, the processing apparatus106configured to measure the hair parameter from the acquired image data is located within the personal care device100,400, in other embodiments, the processing may be performed by a processing apparatus located externally from the personal care device. An example of such an embodiment is shown inFIG.5.FIG.5shows a personal care system500comprising a personal care device502and a processing unit106. The personal care device502is similar to the personal care devices100,400in that it includes a personal care element102and an imaging device104for obtaining image data relating to a portion of a body of a user. However, the personal care device502does not include the processing apparatus106. Rather, the personal care device502includes a controller504, which may be similar to the control unit406discussed above. The controller is configured to control the personal care element102and imaging device104. In this embodiment, the processing apparatus106may comprise a processing unit of a remote computing device, such as a smartphone or tablet computer, for example.
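The calibration or mapping mode described above amounts to averaging each measured hair parameter over a series of images per body part and keeping the result as a user-specific look-up table. The following is a minimal sketch of that bookkeeping, assuming a hypothetical stream of (body part, parameter dictionary) samples; the function name and data format are illustrative and are not taken from the disclosure.

```python
# Hedged sketch of the "calibration"/"mapping" mode: average each measured
# hair parameter over a series of images per body part and keep the result
# as a user-specific look-up table.  Names and formats are assumptions.
from collections import defaultdict
from statistics import mean

def build_user_table(samples):
    """samples: iterable of (body_part, {parameter_name: value}) tuples,
    e.g. produced while the user moves the device over a named body part."""
    per_part = defaultdict(lambda: defaultdict(list))
    for body_part, params in samples:
        for name, value in params.items():
            per_part[body_part][name].append(value)
    return {
        body_part: {name: mean(values) for name, values in params.items()}
        for body_part, params in per_part.items()
    }

# The returned dictionary can be persisted in the memory unit and consulted
# in place of the generic look-up table the next time the device is used.
```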
Signals may be transmitted between the personal care device502and the processing apparatus106via transmission means506, which may be a wired or wireless connection. Using the system500, a user may operate the personal care device502to perform a personal care activity and to acquire image data from his or her skin, and a smartphone or tablet computer, for example, may be used to perform the processing of the image data and the measurement of hair parameters. The smartphone or tablet computer may also be used to present information to the user and to transmit control instructions to the control unit504of the personal care device502in response to the determination of the body part to be treated. A further aspect of the invention relates to a machine-readable medium which may comprise instructions which, when executed by a processor or processing apparatus, cause the processor or processing apparatus to perform the methods disclosed herein.FIG.6shows an example of a machine-readable medium602with a processor604, which may be similar to the processing apparatus106. The machine-readable medium602comprises instructions which, when executed by the processor604, cause the processor to obtain image data relating to a portion of a body of a user; measure, using the image data, a parameter relating to at least one hair growing from the portion of the body; and determine a position of the portion of the body on the body based on the measured parameter and a predetermined relation between the parameter and the position on the body. Variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single processor or other unit may fulfil the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope. | 29,682
11858155 | DETAILED DESCRIPTION OF SOME EMBODIMENTS The present invention will now be described in detail with reference to a few embodiments thereof as illustrated in the accompanying drawings. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without some or all of these specific details. In other instances, well known process steps and/or structures have not been described in detail in order to not unnecessarily obscure the present invention. Further, it should be emphasized that several inventive techniques are described, and embodiments are not limited to systems implementing all of those techniques, as various cost and engineering trade-offs may warrant systems that only afford a subset of the benefits described herein or that will be apparent to one of ordinary skill in the art. Embodiments of an electronic razor capable of collecting hairs as they are cut during the shaving process are introduced herein. Efficient collection of hairs as they are cut may reduce the post-shaving cleanup required by the user and may provide the user with an easy and clean method for disposing of the collected hairs. Some embodiments include an electronic razor including one or more razor blades, one or more razor blade motors, a suction fan, a suction fan motor, a rechargeable battery, and a hair collection compartment. In some embodiments, the rechargeable battery operates at least the suction fan motor and the motor for the one or more razor blades. In some embodiments, the electronic razor includes a frame to which components of the electronic razor are coupled. In some embodiments, the electronic razor includes a processor and a memory. In some embodiments, the suction fan may be used to draw trimmed hairs into the hair collection compartment during the shaving process. In some embodiments, the suction fan may be positioned within the electronic razor beneath the one or more razor blades, wherein the suction fan is positioned such that a front of the suction fan faces towards the one or more razor blades. Various types of suction fans with different fan blades (e.g., shape, angle, size) may be used. In some embodiments, the suction fan motor rotates the suction fan, driving air particles forward, causing the density of air particles and hence air pressure in front of the suction fan to increase. As the pressure in front of the suction fan increases, a vacuum drawing air from the front of the suction fan towards the back of the suction fan is created due to the difference in air pressure across the suction fan. As the pressure drop across the suction fan increases, the volume flow rate of air across the suction fan and the suction strength increase as well. In some embodiments, the suction fan positioned within the electronic razor beneath the one or more razor blades generates a vacuum drawing trimmed hairs inwards, towards the electronic razor. In some embodiments, the trimmed hairs drawn inwards are deposited into the hair collection compartment positioned beneath the suction fan. In some embodiments, the hair collection compartment may be detachable from the electronic razor and may be removed from the suctioning mechanism for emptying. In some embodiments, the hair collection compartment may include a door that may be opened when emptying of the hair collection compartment is required.
Various configurations of the electronic razor are possible. Some embodiments include a suctioning mechanism for electronic razors including a suction fan motor, a suction fan, and a hair collection compartment. In some embodiments, the suctioning mechanism include a rechargeable battery for operating at least the suction fan. Various types of suction fans with different fan blades (e.g., shape, angle, size) may be used. In some embodiments, the suctioning mechanism may be installed within an electronic razor beneath one or more razor blades of the electronic razor, wherein the suctioning mechanism is positioned such that the front of the suction fan faces towards the one or more razor blades. In some embodiments, the suction fan rotates when the one or more razor blades operate, forcing air from the back of the suction fan towards the front of the suction fan, thereby increasing the density of air particles and hence the air pressure in front of the suction fan and decreasing the pressure behind the suction fan. The air pressure drop across the suction fan creates suction that draws air and trimmed hairs inwards towards the back of the suction fan and into the hair collection compartment positioned beneath the suction fan. In some embodiments, the hair collection compartment may be detachable from the suctioning mechanism and may be removed from the suctioning mechanism for emptying. In some embodiments, the hair collection compartment may include a door that may be opened when emptying of the hair collection compartment is required. Various configurations of the suctioning mechanism for electronic razors are possible. In some embodiments, the suctioning mechanism further includes a processor and one or more sensors. In some embodiments, the electronic razor, the suction fan of the electronic razor, and/or the suctioning mechanism may automatically activate when the one or more razor blades of the electronic razor makes contact with skin, thereby maximizing battery efficiency. In some embodiments, the electronic razor and/or the suctioning mechanism may further include one or more sensors for detecting contact between the one or more razor blades and skin. In some embodiments, the one or more sensors may be coupled to a processor that processes the sensor data and determines when there is contact between the one or more razor blades and skin. In some embodiments, the suction fan of the electronic razor and/or the suctioning mechanism may operate when the electronic razor operates. In some embodiments, the electronic razor, the suction fan of the electronic razor, and/or the suctioning mechanism may automatically activate when motion of the electronic razor is detected or contact between the hand of the user and the frame of the electronic razor is detected. In some embodiments, the electronic razor and/or the suctioning mechanism may further include one or more sensors for detecting motion of the electronic razor (e.g., when a user picks up the electronic razor for shaving or when specific movements associated with shaving are performed) or contact between the hand of the user and the frame of the electronic razor. In some embodiments, the one or more sensors may be coupled to a processor that processes the sensor data and determines when there is motion or contact between the hand of the user and the frame of the electronic razor. 
In some embodiments, the electronic razor and/or the suctioning mechanism may further include one or more sensors for detecting one or more fill levels (e.g., empty, low, high, full) of the hair collection compartment. In some embodiments, the one or more sensors may be coupled to a processor that processes the sensor data and determines the fill level of the hair collection compartment. In some embodiments, the processor activates a light on the electronic razor and/or the suctioning mechanism when the hair collection compartment requires emptying. In other embodiments, other methods of notifying a user that the hair collection compartment requires emptying may be used (e.g., a sound, displaying a message on a graphical user interface of the electronic razor, etc.). In some embodiments, the one or more sensors includes an IR transmitter and receiver positioned near the top and on opposite sides of the hair collection compartment such that the transmitter is directly in the line of sight of the receiver. The processor may detect that the hair collection compartment is full when the receiver does not receive the IR signal from the transmitter for a predetermined amount of time as the hair blocks the IR receiver from receiving the signal. In some embodiments, additional IR transmitter and receiver pairs may be positioned at different heights along the length of the hair collection compartment such that multiple fill levels may be detected. In some embodiments, the electronic razor may include various different types of one or more razor blades. For example, the electronic razor may include carbon, stainless steel, or titanium (e.g., for longer lasting razor blades) razor blades. In some embodiments, the speed of the one or more razor blades may be adjusted. In some embodiments, the electronic razor and/or the suctioning mechanism may further include one or more sensors for detecting a length or coarseness of hair. In some embodiments, a processor processes the sensor data and determines the length or coarseness of hair and autonomously adjusts the speed of the one or more razor blades based on the length or coarseness of the hair being trimmed or shaved (e.g., increasing the speed for longer or coarser hair). In some embodiments, the sensor measures electric current provided to the one or more razor blade motors and the processor may estimate the length or coarseness of hair based on the electric current drawn by the one or more razor blade motors. In some embodiments, a higher electric current may be indicative of longer length of hair or increased coarseness as the one or more razor blade motors requires more power to maintain a particular razor blade speed due to the additional resistance from the longer length of hair or increased coarseness of hair. In some embodiments, the processor adjusts the suction fan motor speed based on the coarseness or length of hair. For example, the processor may increase the suction fan motor speed for longer lengths of hair as more hair falls at once. In some embodiments, the electronic razor may further include an internal compartment for shaving fluid and a means for automatically dispersing the shaving liquid onto the skin during the shaving process. In some embodiments, a controlled liquid release mechanism may administer the shaving fluid at a predetermined time or at intervals during the shaving process. 
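The current-based speed adjustment just described can be sketched as a small mapping from estimated load to blade and suction fan speeds. The thresholds, speed values and set_rpm motor interface below are assumptions made for illustration; the disclosure only states that higher motor current indicates longer or coarser hair and that speeds may be increased accordingly.

```python
# Hedged sketch of the adjustment described above: estimate hair coarseness
# or length from the current drawn by the razor blade motor and scale the
# blade and suction fan speeds accordingly.  Thresholds, speeds and the
# set_rpm interface are illustrative assumptions only.
BLADE_SPEED_RPM   = {"fine": 6000, "medium": 7000, "coarse": 8000}
SUCTION_SPEED_RPM = {"fine": 3000, "medium": 3500, "coarse": 4500}

def classify_load(motor_current_a: float) -> str:
    """Higher current implies longer or coarser hair (more cutting resistance)."""
    if motor_current_a < 0.4:
        return "fine"
    if motor_current_a < 0.7:
        return "medium"
    return "coarse"

def adjust_speeds(motor_current_a: float, blade_motor, fan_motor) -> None:
    load = classify_load(motor_current_a)
    blade_motor.set_rpm(BLADE_SPEED_RPM[load])   # assumed motor interface
    fan_motor.set_rpm(SUCTION_SPEED_RPM[load])
```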
The internal compartment for shaving liquid may be refilled autonomously by the electronic razor or another device (e.g., charging station of the electronic razor) or by the user. In some embodiments, the internal compartment may be loaded with a disposable or refillable pod filled with a fluid (e.g., shaving fluid, aftershave fluid, sanitizing fluid, etc.). In some embodiments, a similar internal compartment may be included for aftershave fluid. In some embodiments, the same internal compartment may be used for shaving fluid and aftershave fluid. For example, a shaving fluid pod may be inserted into an internal compartment. A mechanism may disperse the shaving fluid from the pod onto skin before or during the shaving process. After shaving, an aftershave fluid pod may be inserted into the same or a different internal compartment. The same or a different mechanism may disperse the aftershave from the pod onto the skin. In some embodiments, a user may manually disperse shaving fluid or aftershave fluid onto the skin by manually pressing a button or something of the sort. In some embodiments, the electronic razor may include a means for sanitizing any cuts that occur during the shaving process. In some embodiments, sensors may detect a cut on the skin during the shaving process and administer a means for sanitizing the cut such as by dispersing a sanitizing cream or the like onto the cut. In some embodiments, the means for sanitizing any cuts may be contained in an internal compartment of the electronic razor that may be refilled autonomously by the electronic razor or another device or by the user. In some embodiments, sanitizing fluid is administered from a disposable or refillable sanitizing fluid pod loaded into the internal compartment. In some embodiments, a user may manually disperse sanitizing fluid onto the skin by manually pressing a button or something of the sort. In some embodiments, the electronic razor may include a compartment for storing one or more razor blades. In some embodiments, the electronic razor may include a mechanism for autonomously changing the one or more razor blades after a predetermined amount of time or after a predetermined number of uses of the electronic razor. In some embodiments, the electronic razor notifies the user that the one or more razor blades requires replacement after, for example, a predetermined amount of time or a predetermined number of uses of the electronic razor. The electronic razor may notify the user by various means, such as illuminating a light, generating a sound, or displaying a message on a graphical user interface of the electronic razor or an application paired with the processor of the electronic razor. In some embodiments, the application paired with the processor of the electronic razor may be used by the user to order replacement razor blades or may autonomously order replacement razor blades at, for example, predetermined time intervals. FIG.1Aillustrates an example of an electronic razor including razor blades100, sensor101for detecting contact between razor blades100and skin, gear box102, razor blade motor103, hair collection compartment104including sensor105for detecting fill level, filter106, suction fan107, suction fan motor108, air outlets109, processor110, memory111, and battery112, according to some embodiments. 
Razor blade motor103drives gears of gearbox102and subsequently razor blades100through connectors113that interface with the gears of gearbox102on a first end and are coupled to razor blades100on a second end. In some embodiments, hair collection compartment104may be detachable from frame114of the electronic razor or may include a door115that is opened to empty the contents. FIG.1Billustrates another example of an electronic razor including the same components as the electronic razor inFIG.1Ain addition to a fluid compartment116for holding shaving fluid117. A controlled liquid release mechanism may administer the shaving fluid117at a predetermined time or at intervals during the shaving process. FIG.1Cillustrates another example of an electronic razor including the same components as the electronic razor inFIG.1Ain addition to a compartment118for storing new razor blades119. A mechanism may autonomously replace the razor blades100with the new razor blades119after a predetermined amount of time or after a predetermined number of uses of the electronic razor. FIG.2illustrates an example of a flow path of air (indicated by the arrows), according to some embodiments. Suction fan motor108drives suction fan107. Suction fan107generates a vacuum that sucks air in through razor blades100. The air travels past the enclosed gear box102and razor blade motor103into hair collection compartment104, through filter106and is expelled through air outlets109. Trimmed hair follows the flow path of air until hair collection compartment104. The hair remains in hair collection compartment104as it cannot flow past filter106. In some embodiments, the processor of the electronic razor may be wirelessly connected with an application of a communication device, as described herein. In some embodiments, the processor of the electronic razor may be wirelessly connected with a processor of another electronic device on a shared network, as described herein. In some embodiments, the processor of the electronic razor may be wirelessly connected with a home control unit, the home control unit wirelessly connected with processors of other electronic devices, as described herein.FIG.3Aillustrates an example of a communication device300.FIG.3Billustrates an example of another electronic device301, such as an electronic coffee maker, and a network302that may be shared between the electronic device301and the electronic razor.FIG.3Cillustrates an example of a home control unit303and processors304of other electronic devices305. In some embodiments, the electronic razor includes a processor that learns over time when to autonomously activate the electronic razor based on use history of the electronic razor. For example, if a user consistently activates the electronic razor Monday morning at 7:00 AM, the processor may learn over time to autonomously activate the electronic razor a couple of minutes before 7:00 AM such that it is ready for use by the user. In some embodiments, the processor of the electronic razor may learn preferred settings of a user. In some embodiments, the processor may learn preferred settings of the electronic razor associated with coarseness or length of hair (e.g., estimated using a sensor of the electronic razor as described above), day of the week, or another variable. For example, the processor may learn to operate the razor blades at a first particular speed when shaving hair stubble and a second particular speed when shaving a thick beard.
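The use-history learning mentioned above can be sketched, under assumptions, as remembering activation times per weekday and scheduling the next automatic wake-up slightly earlier than the learned time. The two-minute lead, the in-memory history format and the function names are illustrative choices, not details from the disclosure.

```python
# Hedged sketch of the use-history idea above: remember when the razor is
# switched on for each weekday, and schedule the next automatic wake-up a
# couple of minutes before the learned time.  Lead time and storage format
# are assumptions for illustration.
from datetime import datetime
from statistics import mean

LEAD_MINUTES = 2
history = {}     # weekday (0 = Monday) -> list of activation times in minutes

def record_activation(now: datetime) -> None:
    history.setdefault(now.weekday(), []).append(now.hour * 60 + now.minute)

def next_auto_activation(weekday: int):
    """Return the learned wake-up time (hour, minute) for a weekday, if any."""
    if not history.get(weekday):
        return None
    typical = mean(history[weekday]) - LEAD_MINUTES
    return int(typical // 60) % 24, int(typical % 60)

# e.g. after several Monday 07:00 activations, next_auto_activation(0) -> (6, 58)
```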
In some embodiments, electronic razor settings may include a razor blade motor speed, a razor blade motor speed for different coarseness of hair, a razor blade motor speed for different lengths of hair, a suction motor speed, a suction motor speed for different coarseness of hair, a suction motor speed for different lengths of hair, an electronic razor use schedule, and a razor blade replacement schedule. In some embodiments, the user may provide preferred settings to the processor of the electronic razor using a graphical or other type of user interface of the electronic razor or an application of a communication device (e.g., mobile phone, smart watch, tablet, laptop, specialized computer, remote control, etc.) wirelessly connected with the processor of the electronic razor. An example of a graphical user interface of an application of a communication device that may be paired with a processor of an electronic device is described in U.S. patent application Ser. No. 15/272,752 (U.S. Pat. No. 10,496,262) and Ser. No. 15/949,708 (U.S. Patent Application No. 2018/0232134), the entire contents of which is hereby incorporated by reference. In some embodiments, the processor of the electronic razor may be wirelessly connected with at least one other processor of an electronic device. In some embodiments, the two or more connected processors of different electronic devices collaborate by sharing intelligence. For example, the processor of the electronic razor may be connected with a processor of an electronic alarm clock. The processor of the electronic alarm clock may collaborate with the processor of the electronic razor by sharing alarm settings and status with the processor of the electronic razor such that the processor may autonomously activate the electronic razor at a time when the user rises from sleep. In another example, the processor of the electronic razor may be connected with a processor of an electronic shower and may share its status with the processor of the electronic shower such that the processor of the electronic shower may prepare a shower for the user during the shaving process of the user, the shower being ready for the user immediately after shaving. In yet another example, the processor of the electronic razor may be connected with a processor of an electronic coffee maker and may share its status with the processor of the electronic coffee maker such that the electronic coffee maker may brew coffee during the shaving process of the user, the coffee being ready by the time the user enters the kitchen. Examples of collaborative methods for electronic devices are described in U.S. patent application Ser. Nos. 15/981,643, 15/986,670 and 15/048,827 (U.S. Pat. No. 9,661,477), the entire contents of which are hereby incorporated by reference. In some embodiments, the processor of the electronic razor may be wirelessly connected with a home control unit. In some embodiments, a processor of one or more other electronic devices may be connected with the home control unit. In some embodiments, processors of electronic devices share their intelligence with the home control unit. In some embodiments, the home control unit provides instructions to the processors of electronic devices based on at least a portion of intelligence shared with the home control unit. For example, the processor of the electronic razor may share its status with the home control unit. 
Given an active status of the electronic razor, the home control unit may instruct a processor of an electronic shower to prepare a shower for the user or may instruct a processor of an electronic coffee maker to brew coffee for the user. In some instances, the home control unit may ask the user for a confirmation prior to providing an instruction to an electronic device. Examples of a control system for managing one or more autonomous electronic devices are described in U.S. patent application Ser. Nos. 16/130,880 and 16/245,998 (U.S. Pat. No. 11,144,056), the entire contents of which are hereby incorporated by reference. In some embodiments, the electronic razor further includes a charging station for recharging its rechargeable battery. The reader should appreciate that the present application describes several independently useful techniques. Rather than separating those techniques into multiple isolated patent applications, applicants have grouped these techniques into a single document because their related subject matter lends itself to economies in the application process. But the distinct advantages and aspects of such techniques should not be conflated. In some cases, embodiments address all of the deficiencies noted herein, but it should be understood that the techniques are independently useful, and some embodiments address only a subset of such problems or offer other, unmentioned benefits that will be apparent to those of skill in the art reviewing the present disclosure. Due to costs constraints, some techniques disclosed herein may not be presently claimed and may be claimed in later filings, such as continuation applications or by amending the present claims. Similarly, due to space constraints, neither the Abstract nor the Summary of the Invention sections of the present document should be taken as containing a comprehensive listing of all such techniques or all aspects of such techniques. It should be understood that the description and the drawings are not intended to limit the present techniques to the particular form disclosed, but to the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present techniques as defined by the appended claims. Further modifications and alternative embodiments of various aspects of the techniques will be apparent to those skilled in the art in view of this description. Accordingly, this description and the drawings are to be construed as illustrative only and are for the purpose of teaching those skilled in the art the general manner of carrying out the present techniques. It is to be understood that the forms of the present techniques shown and described herein are to be taken as examples of embodiments. Elements and materials may be substituted for those illustrated and described herein, parts and processes may be reversed or omitted, and certain features of the present techniques may be utilized independently, all as would be apparent to one skilled in the art after having the benefit of this description of the present techniques. Changes may be made in the elements described herein without departing from the spirit and scope of the present techniques as described in the following claims. Headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description. 
As used throughout this application, the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). The words “include”, “including”, and “includes” and the like mean including, but not limited to. As used throughout this application, the singular forms “a,” “an,” and “the” include plural referents unless the content explicitly indicates otherwise. Thus, for example, reference to “an element” or “a element” includes a combination of two or more elements, notwithstanding use of other terms and phrases for one or more elements, such as “one or more.” The term “or” is, unless indicated otherwise, non-exclusive, i.e., encompassing both “and” and “or.” Terms describing conditional relationships, e.g., “in response to X, Y,” “upon X, Y,”, “if X, Y,” “when X, Y,” and the like, encompass causal relationships in which the antecedent is a necessary causal condition, the antecedent is a sufficient causal condition, or the antecedent is a contributory causal condition of the consequent, e.g., “state X occurs upon condition Y obtaining” is generic to “X occurs solely upon Y” and “X occurs upon Y and Z.” Such conditional relationships are not limited to consequences that instantly follow the antecedent obtaining, as some consequences may be delayed, and in conditional statements, antecedents are connected to their consequents, e.g., the antecedent is relevant to the likelihood of the consequent occurring. Statements in which a plurality of attributes or functions are mapped to a plurality of objects (e.g., one or more processors performing steps A, B, C, and D) encompasses both all such attributes or functions being mapped to all such objects and subsets of the attributes or functions being mapped to subsets of the attributes or functions (e.g., both all processors each performing steps A-D, and a case in which processor1performs step A, processor2performs step B and part of step C, and processor3performs part of step C and step D), unless otherwise indicated. Further, unless otherwise indicated, statements that one value or action is “based on” another condition or value encompass both instances in which the condition or value is the sole factor and instances in which the condition or value is one factor among a plurality of factors. Unless otherwise indicated, statements that “each” instance of some collection have some property should not be read to exclude cases where some otherwise identical or similar members of a larger collection do not have the property, i.e., each does not necessarily mean each and every. Limitations as to sequence of recited steps should not be read into the claims unless explicitly specified, e.g., with explicit language like “after performing X, performing Y,” in contrast to statements that might be improperly argued to imply sequence limitations, like “performing X on items, performing Y on the X'ed items,” used for purposes of making claims more readable rather than specifying sequence. Statements referring to “at least Z of A, B, and C,” and the like (e.g., “at least Z of A, B, or C”), refer to at least Z of the listed categories (A, B, and C) and do not require at least Z units in each category. 
Unless specifically stated otherwise, as apparent from the discussion, it is appreciated that throughout this specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining” or the like refer to actions or processes of a specific apparatus, such as a special purpose computer or a similar special purpose electronic processing/computing device. Features described with reference to geometric constructs, like “parallel,” “perpendicular/orthogonal,” “square”, “cylindrical,” and the like, should be construed as encompassing items that substantially embody the properties of the geometric construct, e.g., reference to “parallel” surfaces encompasses substantially parallel surfaces. The permitted range of deviation from Platonic ideals of these geometric constructs is to be determined with reference to ranges in the specification, and where such ranges are not stated, with reference to industry norms in the field of use, and where such ranges are not defined, with reference to industry norms in the field of manufacturing of the designated feature, and where such ranges are not defined, features substantially embodying a geometric construct should be construed to include those features within 15% of the defining attributes of that geometric construct. The terms “first”, “second”, “third,” “given” and so on, if used in the claims, are used to distinguish or otherwise identify, and not to show a sequential or numerical limitation. | 28,739 |
11858156 | DETAILED DESCRIPTION The present disclosure is primarily aimed at providing a lubricating strip for a razor cartridge, which has two layers having different colors and divided into a plurality of shaving sections, the lubricating strip provided with various indicators indicating a usage state of the razor cartridge. It is also an object of the present disclosure to provide a lubricating strip for a razor cartridge providing an indication of a pre-use condition of the razor cartridge. Hereinafter, some embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. In the following description, like reference numerals designate like elements, although the elements are shown in different drawings. Further, in the following description of some embodiments, a detailed description of known functions and configurations incorporated therein will be omitted for the purpose of clarity and for brevity. In describing the components of the embodiments according to the present disclosure, various terms such as first, second, i), ii), a), b), etc., may be used solely for the purpose of differentiating one component from the other, not to imply or suggest the substances, the order or sequence of the components. Throughout this specification, when a part “includes” or “comprises” a component, the part is meant to further include other components, not to exclude thereof unless specifically stated to the contrary. In addition, the terms width direction, height direction, and length direction as used herein refer to the direction along the width of a lubrication strip, the direction along the height thereof, and the direction along the length thereof, respectively. For example,FIG.4illustrates the x-axis, y-axis, and z-axis directions that correspond to the width direction, the height direction, and the length direction of the lubricating strip, respectively. FIG.1is a perspective view of a razor cartridge1according to at least one embodiment of the present disclosure. FIG.2is a cross-sectional view of the razor cartridge1taken on the line II-II ofFIG.1, according to the embodiment of the present disclosure. The razor cartridge1may include a lubricating strip110, a blade housing120, a guard130, a cap140, and one or more cutting blades150as shown inFIG.1, and also a trimming blade160as shown inFIG.2. The blade housing120may accommodate one or more cutting blades150(may be referred to as “the cutting blade” hereinafter) having a cutting edge152. The guard130may be located in front of the cutting blade150. Specifically, the guard130may be disposed on the upper surface of the blade housing120, to which the cutting edge152of the cutting blade150is directed. During shaving, the guard130may stretch the skin before the cutting blade150cuts the hair. This erects the hair to be perpendicular to the skin surface and further assists the cutting blade150in cutting the hair. The guard130may have an elastic member to effectively stretch the skin. The cap140may be located at the rear of the cutting blade150on the upper surface of the blade housing120. The guard130and the cap140may contact the user's skin when shaving, thereby defining a shaving plane. The lubricating strip110may be disposed on the upper surface of the blade housing120, and may apply a lubricating component to the skin when shaving. To this end, the lubricating strip110may be arranged in an area to be in contact with the skin. 
Specifically, the lubricating strip110may be disposed adjacent to one or more of the guard130and the cap140. InFIGS.1and2, the lubricating strip110is illustrated as being provided in the cap140, but the present disclosure is not limited thereto. For example, in another embodiment, the lubricating strip110may be provided only in the guard130or may be provided in both the guard130and the cap140. The cutting blade150may be accommodated on one side of the blade housing120and may have the cutting edge152for cutting hair. Specifically, the cutting edge152of the cutting blade150may be configured to cut the hair during the primary shaving. The cutting edge152may face the top surface of the blade housing120. The trimming blade160may be accommodated on the other side of the blade housing120, and may have a trimming edge162for cutting the hair during trimming shaving. The trimming edge162may face the bottom surface of the blade housing120, which is opposite the facing direction of the cutting edge152of the cutting blade150. As shown inFIG.2, the cap140may include a first support bar142and a second support bar144. The first support bar142may be positioned in front of the lubricating strip110in the width direction. Specifically, the first support bar142may be located between the lubricating strip110and the rearmost cutting blade150A. The first support bar142may be located at the rear in the height direction of the lubricating strip110based on the top exposure surface of the lubricating strip110and the second support bar144. The first support bar142may be completely covered by the lubricating strip110. In this case, between the rearmost cutting blade150A and the lubricating strip110, the skin can be prevented from being caught by the first support bar142when shaving, whereby the razor cartridge1can glide well owing to the lubricating strip110behind the rearmost cutting blade150A. In addition, the skin will be in contact with the lubricating strip110right after the rearmost cutting blade150A passes the skin, thus improving the lubrication performance by the lubricating strip110. The second support bar144may be located at the rear in the width direction of the lubricating strip110. The second support bar144may be located at the rear in the height direction of the lubricating strip110based on the top exposure surface of the lubricating strip110. Specifically, at least a portion of the second support bar144may be exposed to the outside when the second support bar144is not covered by the lubricating strip110. The second support bar144according to at least one embodiment of the present disclosure is located forwardly of the first support bar142in the height direction, and thus, as the lubricating strip110is worn out, the second support bar144may be in direct contact with the skin. Then, the second support bar144, in cooperation with the guard130located in front thereof, can define the shaving plane of the cutting edge152, thereby preventing the shaving plane from declining excessively low, and thus protecting the skin from being cut by the cutting edge152. For example, an embodiment may be considered where the second support bar144is not exposed to the outside and the shaving plane is defined by the lubricating strip110that becomes relatively low in height when the lubricating strip110is excessively worn flat to a third shaving layer A3which will be described later. In that case, the cutting edge152may be excessively exposed, which makes the skin susceptible to cuts by the cutting edge152when shaving. 
However, the present disclosure is not bound by specific illustrations of the configurations of the first support bar142and the second support bar144. For example, according to another embodiment, both the first support bar142and the second support bar144may be completely covered by the lubricating strip110, or the first support bar142and the second support bar144may be configured to be partially exposed to the outside, respectively. FIG.3is a front view of a lubricating strip110for a razor cartridge according to at least one embodiment of the present disclosure. FIG.4is a perspective view of a lubricating strip110for a razor cartridge according to at least one embodiment of the present disclosure. As shown inFIGS.3and4, the lubricating strip110may include a first layer112, a second layer114, and a support portion116. The first layer112may include a first lubricating material which may have a first color. The second layer114may be disposed under the first layer112and may include a second lubricating material. The second lubricating material may have a second color different from the first color. The first color and the second color may be configured to be complementary to each other for intuitive recognition by the user, but the present disclosure is not limited thereto. The first lubricating material and the second lubricating material may include a water-soluble polymer and a water-insoluble polymer. The water-soluble polymer, which is relatively more soluble in water, is the component that substantially provides the lubricating performance of the lubricating strip110. The higher the proportion of the water-soluble polymer, the better the lubrication performance of the lubricating strip110is, which may accelerate the wear of the lubricating strip110. The water-insoluble polymer is relatively insoluble in water and serves to maintain the top shape of the lubricating strip110. The higher the ratio of the water-insoluble polymer, the better the durability of the lubricating strip110is, which may degrade the lubrication performance of the lubricating strip110. Therefore, in the lubricating strip110, the weight ratio of the water-soluble polymer in the first lubricating material of the upper layer may have a larger value than that of the water-soluble polymer in the second lubricating material of the lower layer. For example, when the water-soluble polymer and the water-insoluble polymer have the total weight of 100%, the weight ratio of the water-insoluble polymer of the first lubricating material may be 10% to 40%, and the weight ratio of the water-soluble polymer of the first lubricating material may be 60% to 90%. The second lubricating material may have a weight ratio of the water-insoluble polymer of 30% to 50% and a weight ratio of the water-soluble polymer of 50% to 70%. This configuration improves the lubrication performance of the first layer112which is frequently in contact with the skin and enhances the durability of the second layer114that underprops the first layer112. However, the present disclosure is not limited thereto, and the weight ratio of the water-soluble polymer of the first lubricating material may have the same value as that of the water-soluble polymer of the second lubricating material. 
This leaves difference only in the amount of the master batch mixed with each lubricating component or the type of the master batch between the first lubricating material and the second lubricating material, which can provide an advantage in the manufacturing process of the lubricating strip110. For example, in the preparation of the first lubricating material and the second lubricating material, the water-soluble polymer and the water-insoluble polymer may undergo a common mixing process, and then, only the mixing process of the master batch may be performed separately. The water-insoluble polymer of the first lubricating material and the water-insoluble polymer of the second lubricating material may include one or more of polystyrene (PS), polypropylene (PP), polyethylene (PE), thermoplastic elastomer (TPE), acrylonitrile butadiene styrene (ABS), or polycarbonate (PC). The water-soluble polymer of the first lubricating material and the water-soluble polymer of the second lubricating material may include one or more of polyethylene oxide (PEO), polyvinyl pyrrolidone (PVP), polyacrylamide (PAM), polyvinyl imidazoline (PVI), polyvinyl alcohol (PVA), polysulfone (PSU), polyhydroxyethyl methacrylate (PHEMA), or polyethylene glycol (PEG). The first lubricating material and the second lubricating material may include a lubrication performance enhancer. The lubrication performance enhancer of the first lubricating material and the second lubricating material may include one or more of a super absorbent polymer (SAP) or a polyalkylene oxide (PAO). Specifically, SAP and PAO included in the lubrication performance enhancer may help dissolve the water-soluble polymer by absorbing water around the lubricating strip110. The lubrication performance enhancers of the first lubricating material and the second lubricating material may each have a weight ratio of 0.1% to 10%. At least one of the first lubricating material or the second lubricating material may include a master batch. The master batch is a coloring raw material for coloring the plastic and may be included in the first lubricating material and the second lubricating material to have a first color and a second color, respectively. However, the present disclosure is not limited thereto, and only one of the first and second layers may include the master batch, and the other may not include thereof. In this case, the layer without the master batch may have a white color. The first layer112and the second layer114may be manufactured by a method of extrusion or injection, but the present disclosure is not limited thereto. The first layer112and the second layer114may at least partially form boundary lines that are not parallel to a width direction of the lubricating strip, on a cross-section of the lubricating strip110cut in the width direction (direction perpendicular to the longitudinal direction) of the lubricating strip110. For example, as shown inFIG.3, the first layer112and the second layer114have a left boundary line B1of a positive slope with respect to a straight line parallel to the width direction of the lubricating strip110. The first layer112and the second layer114have a right boundary line B2of a negative slope with respect to the straight line parallel to the width direction of the lubricating strip110. The left boundary line B1and the right boundary line B2meet at a top point118of the second layer114. 
The lubricating strip110according to at least one embodiment of the present disclosure features that the first layer112and the second layer114have such boundary lines as configured to be non-parallel to the width direction of the lubricating strip110, whereby displaying a shape-changing cross-section of the lubricating strip110in response to increased degree of usage thereof. A detailed description in this regard will be presented in relation toFIG.5. AlthoughFIGS.3and4illustrate the boundary lines between the first layer112and the second layer114as having a triangular profile facing the top of the lubricating strip110, the present disclosure is not limited thereto. Various embodiments of the profile of the boundary lines between the first layer112and the second layer114are described in relation toFIGS.8A to8D. Referring again toFIGS.3and4, the first layer112and the second layer114include a first shaving layer A1, a second shaving layer A2, and a third shaving layer A3. Specifically, the first layer112and the second layer114may have their first shaving layer A1, second shaving layer A2, and third shaving layer A3arranged to be distinguished from each other in a direction parallel to the height direction of the lubricating strip110. The first shaving layer A1may include the first layer112. In particular, the first shaving layer A1may include only the first layer112and may not include the second layer114. The second shaving layer A2may be located below the first shaving layer A1and may include the first layer112and the second layer114. The third shaving layer A3may be located below the second shaving layer A2and may include the second layer114. In particular, the third shaving layer A3may include only the second layer114and may not include the first layer112. In the first shaving layer A1, the first layer112may be configured to be removed by the first use of the lubricating strip110. Here, the first use refers to the use of the razor from start to finish of shaving for the first time. Thus, first use will typically be made of a plurality of strokes, although the present disclosure is not limited thereto. For example, the first use may be made of one stroke, depending on the type of shaving. In the first shaving layer A1, the first layer112may have a sufficient degree of thickness or solubility so that it can be removed by first use. Specifically, the first layer112in the first shaving layer A1may have a sufficient degree of thickness or solubility to fade away until the user finishes shaving in the first use of the razor. The first layer112in the first shaving layer A1may have a sufficient degree of solubility or thickness so that it can be removed by the first use. By first use, as the first layer112is removed from the first shaving layer A1, the first shaving layer A1may also be removed, thereby revealing the second shaving layer A2. In this case, the first shaving layer A1including only the first layer112may display the first color alone, but the second shaving layer A2including the first layer112and the second layer114may display the first and second colors together. Therefore, the user can recognize that the razor cartridge is in an unused condition by checking the lubricating strip110marked with only the first color. Conversely, the user can recognize that the razor cartridge has been used at least once by checking the lubricating strip110that is marked with the first and second colors together. 
Thus, the first shaving layer A1according to at least one embodiment of the present disclosure may serve as an indicator for informing the user that the razor cartridge1is in an unused condition. For the indicator function of the first shaving layer A1, in the unused condition, most of the area of the lubricating strip110is preferably indicated by the first color of the first layer112. Accordingly, prior to using the razor cartridge1, with the lubricating strip110mounted to the razor cartridge1, the first layer112may be configured to have its top exposure surface occupy 95% to 100% of the top exposure surface of the lubricating strip110. For example, as in the embodiment shown inFIG.8A, a lubricating strip310may have a first layer312and a second layer314, wherein the first layer312has a curved top exposure surface, and the first layer312and the second layer314have a concave down profile of boundary so that some of the second layer314in second shaving layer A2may be exposed in a state that the lubricating strip310is mounted to the razor cartridge. Specifically, the second layer314of the second shaving layer A2may have its portion exposed at both sides in the width direction of the lubricating strip310. However, even in this case, the portion occupied by the second layer314of the second shaving layer A2is very small in the entire top exposure surface of the lubricating strip310which thereby continues to offer the indicator function intact for indicating that the first shaving layer A1is in an unused condition. Referring back toFIGS.3and4, the top exposure surface of the lubricating strip110may have a round shape that includes curved surfaces. The round shape of the top exposure surface of the lubricating strip110causes the first layer112to have its central area protruded relative to the peripheral area thereof in the first shaving layer A1. As a result, the first layer112which substantially performs the lubricating function may have better contact with the skin, thereby further improving the function of applying the lubricating component of the lubricating strip110to the skin. In addition, the relatively salient central area of the first layer112as compared to the surrounding area facilitates smooth removal of the first shaving layer A1, whereby further improving the indicator function of the first shaving layer A1when indicating its unused condition. However, the present disclosure is not limited thereto, and the top exposure surface of the lubricating strip110may have a flat surface that does not include a curved surface. In this case, a contact area of the first layer112in contact with the skin may be increased. The cross-section of the lubricating strip110cut in the direction perpendicular to the height direction of the lubricating strip110in the second shaving layer A2may include at least some of the first layer112and at least some of the second layer114. Therefore, when the lubricating strip110is used within the second shaving layer A2, the exposed surface of the lubricating strip110may reveal both the first layer112having the first color and the second layer114having the second color. In this way, the user can recognize that the razor cartridge is in a used condition by checking the lubricating strip110that displays both the first color and the second color. 
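As an illustrative aside (not part of the disclosed embodiments), the color-based indicator logic described above can be restated as a small sketch: the usage state follows from which layers remain exposed at the current top surface of the strip. The function name, the returned labels, and the layer-boundary heights used in the example are hypothetical assumptions.

```python
# Illustrative sketch only: hypothetical heights and labels, not the disclosed design.

def usage_state(current_height: float, a1_bottom: float, a2_bottom: float) -> str:
    """Infer the cartridge usage state from the remaining strip height.

    current_height: remaining height of the lubricating strip (measured from its base).
    a1_bottom: height of the boundary between the first and second shaving layers (A1/A2).
    a2_bottom: height of the boundary between the second and third shaving layers (A2/A3).
    """
    if current_height > a1_bottom:
        return "unused (first color only)"          # first shaving layer A1 still intact
    if current_height > a2_bottom:
        return "in use (first and second colors)"   # worn into second shaving layer A2
    return "used up (second color only) - replace"  # only third shaving layer A3 remains


if __name__ == "__main__":
    # Hypothetical boundaries: A1 above 3.0 mm, A2 between 1.5 mm and 3.0 mm, A3 below 1.5 mm.
    for h in (3.4, 2.2, 1.0):
        print(h, "->", usage_state(h, a1_bottom=3.0, a2_bottom=1.5))
```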
In addition, in the second shaving layer A2, at least some of the boundary between the first layer112and the second layer114may be configured not to be parallel to the width direction of the lubricating strip110, thereby displaying an exposed surface of the lubricating strip110whose shape changes in response to the increased degree of usage thereof. This allows the user to grasp the degree of usage of the razor cartridge by confirming the shape formed by the first and second colors. The third shaving layer A3having only the second layer114may display the second color alone, and the user may see the lubricating strip110displaying only the second color and thereby recognize that the relevant razor cartridge has been completely used and needs to be replaced with a new razor cartridge. As a result, the third shaving layer A3according to at least one embodiment of the present disclosure may serve as an indicator that informs the user of the complete use and the replacement time of the razor cartridge1. The lubricating strip110may have a support116which extends from the second layer114in the height direction of the lubricating strip110. Of the lubricating strip110, the support116may be an area inserted into and received in the blade housing120. Specifically, the support116may be inserted into a recess122located in the rear of the blade housing120as shown inFIG.2, and for this purpose, may include a hook1162. The hook1162may snap-fit with a protrusion124(inFIG.2) formed at one side of the recess122. The support116may be made of the same material as the second layer114. In this case, the second layer114and the support116may be integrally formed by extrusion. However, the present disclosure is not limited thereto, and the support116may be made of a material different from that of the second layer114. For example, the support116may include a higher proportion of water-insoluble polymer as compared to the second layer114to improve durability. InFIGS.1to4, the lubricating strip110is illustrated as including the support116which is inserted into the blade housing120so that the lubricating strip110is mounted to the razor cartridge1. However, the present disclosure is not limited thereto. For example, according to another embodiment, the lubricating strip110may not include the support116, in which case the lubricating strip110may be mounted to the razor cartridge1by way of attaching one side of the second layer114to one or more of the guard130and the cap140. FIG.5illustrates plan views of various states of a lubricating strip110for a razor cartridge caused by use of the lubricating strip according to at least one embodiment of the present disclosure. Specifically,FIG.5shows at (a) to (d) the exposed surface sections of the lubricating strip110when having vertical heights of H1to H4inFIG.3, respectively. As shown inFIG.5at (a), when the lubricating strip110has a vertical height of H1, that is, when the lubricating strip110is yet to be used, the exposed surface of the lubricating strip110may show the first layer112by the first color alone. In this case, the user can recognize that the razor cartridge is in an unused condition by checking the lubricating strip110displaying the first color alone. As shown inFIG.5at (b), when the lubricating strip110has a vertical height of H2, that is, when the first shaving layer A1is removed by the first use of the lubricating strip110, the exposed surface of the lubricating strip110displays the second color of the second layer114for the first time. 
At this time, the second color of the second layer114visible on the exposed surface of the lubricating strip110may have a shape of an elongated strip extending along the longitudinal direction of the lubricating strip110. The user can recognize that the razor cartridge has been used at least once by seeing the elongated strip of the second color displayed on the lubricating strip110. As shown inFIG.5at (c), when the lubricating strip110has a vertical height of H3, that is, when the lubricating strip110is used down to the mid-level, the exposed surface of the lubricating strip110may show a decreased ratio of the first color of the first layer112and an increased ratio of the second color of the second layer114compared with the state shown at (b). Since the boundary lines between the first layer112and the second layer114have a triangular profile that is not parallel to the width direction of the lubricating strip110, the elongated strip of the second color shown inFIG.5at (b) will increase widthwise in response to increased use of the lubricating strip110. The user can recognize that the razor cartridge has been used more compared to the state shown inFIG.5(b)by confirming that the width of the elongated strip of the second color has increased from that shown inFIG.5(b). As shown inFIG.5at (d), when the lubricating strip110has a vertical height of H4, that is, when the lubricating strip110has been used completely and the second shaving layer A2has been entirely removed, the exposed surface of the lubricating strip110may display only the second color of the second layer114. In this case, the user can recognize that the razor cartridge has been used completely and needs to be replaced with a new one by confirming the lubricating strip110displaying the second color only. The lubricating strip110according to at least one embodiment of the present disclosure can inform the user of the states of the lubricating strip110, i.e., an unused state, an in-use state, and a used-up state, sequentially. In the lubricating strip110according to at least one embodiment of the present disclosure, the boundary lines between the first layer112and the second layer114are configured to have a profile, at least a part of which is not parallel to the width direction of the lubricating strip110, thereby informing the user of the degree of usage of the razor cartridge1by variously displaying the second-color zone of the second shaving layer A2. In another embodiment of the present disclosure shown inFIG.6andFIG.7, unlike the above-illustrated embodiment of the present disclosure exemplified inFIGS.1to5, the boundary lines between the first layer and the second layer may include a plurality of protrusion profiles which will be described below. The following will focus on distinctive features according to another embodiment of the present disclosure, and repetitive description of features substantially the same as the first-mentioned embodiment will be omitted to avoid redundancy. FIG.6is a front view of a lubricating strip210for a razor cartridge according to another embodiment of the present disclosure. The lubricating strip210includes a support portion216and a hook2162. As shown inFIG.6, on the cross-section of the lubricating strip210cut in the direction perpendicular to the longitudinal direction of the lubricating strip210, the boundary lines between the first layer212and the second layer214may be defined by a plurality of protrusion profiles218A-218D. 
In the present specification, the protrusion profiles218A-218D refer to portions projecting toward the top of the lubricating strip210on the boundary lines between the first layer212and the second layer214. The first layer212and the second layer214include a first shaving layer A1, a second shaving layer A2, and a third shaving layer A3. The protrusion profiles218A-218D may each have a convex upward shape or a sharp point upward, but the present disclosure is not limited thereto. The vertical heights of the vertices or peaks of the respective protrusion profiles218A-218D may be configured to be different from each other. For example, the first to fourth protrusion profiles218A-218D sequentially arranged from the left side shown inFIG.6may have peaks different from each other in vertical height. Specifically, the peaks of the first to fourth protrusion profiles218A-218D are gradually decreased in vertical height from one side to another side, for example, from left to right. However, the present disclosure is not limited thereto, and according to another embodiment, the plurality of protrusion profiles218with peaks having different vertical heights may not be sequentially disposed in order of vertical heights of the peaks. FIG.7illustrates plan views of various states of a lubricating strip210for a razor cartridge caused by use of the lubricating strip according to at least one embodiment of the present disclosure. Specifically,FIG.7shows at (a) to (d) the exposed surface sections of the lubricating strip210having vertical heights of H21to H24inFIG.6, respectively. As shown inFIG.7at (a), where the lubricating strip210has a vertical height of H21, the first protrusion profile218A, when worn down to reveal its peak, may display an elongated strip of the second color in place of the first protrusion profile218A. As shown inFIG.7at (b), where the lubricating strip210has a vertical height of H22, the second protrusion profile218B, when worn down to reveal its peak, may display an elongated strip of the second color in place of the second protrusion profile218B. At this time, two elongated strips of the second color are visible from the lubricating strip210, and the elongated strip displayed at the position of the first protrusion profile218A has a greater width than that at (a) ofFIG.7. FIG.7at (c) and (d), similar to the first and second protrusion profiles218A,218B described above, shows that the third protrusion profile218C and the fourth protrusion profile218D sequentially display elongated strips of the second color in place thereof, resulting in an increased number of elongated strips of the second color. The earlier displayed elongated strips of the second color may continue to widen as the lubricating strip210is further used. The lubricating strip210according to another embodiment of the present disclosure features that the first layer212and the second layer214have such boundary lines as configured to include the plurality of protrusion profiles218having peaks of different vertical heights, thereby displaying a varying number of elongated strips of the second color in response to increased degree of usage of the lubricating strip210. Thus, the user can intuitively recognize the degree of usage of the razor cartridge by confirming the number of elongated strips of the second color displayed on the lubricating strip210. Profiles of the at least one boundary line between the first layer and the second layer according to the present disclosure are not limited to those shown inFIGS.1to7. 
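Before turning to further profile variants, the way these boundary profiles convert wear depth into a visible indication can be summarized with a short illustrative sketch. The functions and all numeric dimensions below are assumptions made for the example, not values from the disclosure; they simply model a second-color strip that widens linearly as the strip wears below the apex of a triangular boundary (FIGS. 3 to 5), and a strip count that grows as wear passes the successive peaks of a multi-protrusion boundary (FIGS. 6 and 7).

```python
# Illustrative geometry sketch only (hypothetical dimensions, not the claimed design).

def triangular_strip_width(height: float, apex_height: float,
                           base_width: float, base_height: float) -> float:
    """Width of the visible second-color strip once wear passes the triangle apex."""
    if height >= apex_height:
        return 0.0                      # still above the apex: only the first color shows
    if height <= base_height:
        return base_width               # worn past the triangle base: full width shows
    frac = (apex_height - height) / (apex_height - base_height)
    return frac * base_width            # widens linearly with wear depth


def visible_strip_count(height: float, peak_heights: list) -> int:
    """Number of second-color strips visible for a boundary with several peaks."""
    return sum(1 for peak in peak_heights if height < peak)


if __name__ == "__main__":
    for h in (3.2, 2.5, 1.8):           # hypothetical worn-down surface heights
        print("strip width at", h, "=", round(triangular_strip_width(h, 3.0, 4.0, 1.5), 2))
    print("strips visible at 2.1 =", visible_strip_count(2.1, [2.8, 2.5, 2.2, 1.9]))
```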
Accordingly, any further profiles may be embodied by the present disclosure provided a first layer and a second layer at least partially form boundary lines that are not parallel to the width direction of the lubricating strip. In this regard,FIGS.8A to8Dillustrate various embodiments of the profile of the boundary lines between the first layer and the second layer. FIGS.8A to8Dare diagrams of lubricating strips for a razor cartridge according to further embodiments of the present disclosure. Each ofFIGS.8A to8Dshows a lubricating strip310,410,510, or610, respectively, including a support portion316,416,516, or616, respectively, and corresponding hook3162,4162,5162, or6162, respectively. Each of the first layer312,412,512, or612and the second layer314,414,514, or614shown inFIG.8A to8D, respectively, includes a first shaving layer A1, a second shaving layer A2, and a third shaving layer A3. As shown inFIG.8A, profiles of boundary lines between the first layer312and the second layer314may have an inverted triangle shape. In this case, protrusion profiles318may be provided on both sides of the lubricating strip310, so that elongated strips as revealed by the first use of the lubricating strip310display the second color on both sides of the lubricating strip310. As shown inFIG.8B, a lubricating strip410may include a first layer412and a second layer414jointly forming the profiles of a boundary line which has a concave down shape. In this case, similar toFIG.8A, protrusion profiles418may be disposed on both sides of the lubricating strip410, so that elongated strips as revealed by the first use of the lubricating strip410display the second color on both sides of the lubricating strip410. Since the profile of the boundary line between the first layer412and the second layer414has the concave down shape, the reduction of the area of the first layer412with the use of the lubricating strip410may be made slower when compared with the configuration inFIG.8A. As shown inFIG.8C, a lubricating strip510may include a first layer512and a second layer514jointly forming a profile of a boundary line that has a convex upward configuration. In this case, a protrusion profile518may be provided in the middle of the lubricating strip510so that an elongated strip as revealed by the first use of the lubricating strip510may be displayed centrally of the lubricating strip510by the second color. Since the profile of the boundary line between the first layer512and the second layer514has the convex upward configuration, the increase in the area of the second layer514with the use of the lubricating strip510may be made faster when compared with the inverted triangle profile shown inFIG.3. As shown inFIG.8D, a lubricating strip610may include a first layer612and a second layer614jointly forming a profile of a boundary line which has a diagonal shape from the upper left to the lower right. In this case, a protrusion profile618may be disposed on the left side of the lubricating strip610such that an elongated strip as exposed by the first use of the lubricating strip610is displayed on the left side of the lubricating strip610by the second color. Since the profile of the boundary line between the first layer612and the second layer614has the diagonal shape, the increase of the second layer614due to the use of the lubricating strip610may be made by the second layer614progressively spreading its territory from left to right. 
FIGS.8A to8Dillustrate various embodiments of the profile of the boundary lines between the first layer and the second layer of the present disclosure, which, however, is not limited thereto, and the profiles of the boundary lines between the first layer and the second layer of the present disclosure may have various other shapes. As described above, according to at least one embodiment of the present disclosure, the lubricating strip for the razor cartridge provides various indicators indicating the usage of the razor cartridge through two layers having different colors, thus offering convenience in using the razor. Although exemplary embodiments of the present disclosure have been described for illustrative purposes, those skilled in the art will appreciate that various modifications, additions, and substitutions are possible, without departing from the various characteristics of the disclosure. Therefore, exemplary embodiments of the present disclosure have been described for the sake of brevity and clarity. The scope of the technical idea of the present embodiments is not limited by the illustrations. Accordingly, one of ordinary skill would understand the scope of the disclosure is not limited by the above explicitly described embodiments but by the claims and equivalents thereof. | 35,922 |
11858157 | DETAIL DESCRIPTIONS OF THE INVENTION As a preliminary matter, it will readily be understood by one having ordinary skill in the relevant art that the present disclosure has broad utility and application. As should be understood, any embodiment may incorporate only one or a plurality of the above-disclosed aspects of the disclosure and may further incorporate only one or a plurality of the above-disclosed features. Furthermore, any embodiment discussed and identified as being “preferred” is considered to be part of a best mode contemplated for carrying out the embodiments of the present disclosure. Other embodiments also may be discussed for additional illustrative purposes in providing a full and enabling disclosure. Moreover, many embodiments, such as adaptations, variations, modifications, and equivalent arrangements, will be implicitly disclosed by the embodiments described herein and fall within the scope of the present disclosure. Accordingly, while embodiments are described herein in detail in relation to one or more embodiments, it is to be understood that this disclosure is illustrative and exemplary of the present disclosure, and are made merely for the purposes of providing a full and enabling disclosure. The detailed disclosure herein of one or more embodiments is not intended, nor is to be construed, to limit the scope of patent protection afforded in any claim of a patent issuing here from, which scope is to be defined by the claims and the equivalents thereof. It is not intended that the scope of patent protection be defined by reading into any claim limitation found herein and/or issuing here from that does not explicitly appear in the claim itself. Thus, for example, any sequence(s) and/or temporal order of steps of various processes or methods that are described herein are illustrative and not restrictive. Accordingly, it should be understood that, although steps of various processes or methods may be shown and described as being in a sequence or temporal order, the steps of any such processes or methods are not limited to being carried out in any particular sequence or order, absent an indication otherwise. Indeed, the steps in such processes or methods generally may be carried out in various different sequences and orders while still falling within the scope of the present disclosure. Accordingly, it is intended that the scope of patent protection is to be defined by the issued claim(s) rather than the description set forth herein. Additionally, it is important to note that each term used herein refers to that which an ordinary artisan would understand such term to mean based on the contextual use of such term herein. To the extent that the meaning of a term used herein—as understood by the ordinary artisan based on the contextual use of such term—differs in any way from any particular dictionary definition of such term, it is intended that the meaning of the term as understood by the ordinary artisan should prevail. Furthermore, it is important to note that, as used herein, “a” and “an” each generally denotes “at least one,” but does not exclude a plurality unless the contextual use dictates otherwise. When used herein to join a list of items, “or” denotes “at least one of the items,” but does not exclude a plurality of items of the list. Finally, when used herein to join a list of items, “and” denotes “all of the items of the list.” The following detailed description refers to the accompanying drawings. 
Wherever possible, the same reference numbers are used in the drawings and the following description to refer to the same or similar elements. While many embodiments of the disclosure may be described, modifications, adaptations, and other implementations are possible. For example, substitutions, additions, or modifications may be made to the elements illustrated in the drawings, and the methods described herein may be modified by substituting, reordering, or adding stages to the disclosed methods. Accordingly, the following detailed description does not limit the disclosure. Instead, the proper scope of the disclosure is defined by the claims found herein and/or issuing here from. The present disclosure contains headers. It should be understood that these headers are used as references and are not to be construed as limiting upon the subjected matter disclosed under the header. The present disclosure includes many aspects and features. Moreover, while many aspects and features relate to, and are described in the context of a shaving implement, embodiments of the present disclosure are not limited to use only in this context. Overview The present disclosure describes a malleable razor implement with a bendable razor blade. Further, the malleable (or bendable) razor implement (or implement) with a bendable razor blade (or razor blade) may be mated to or inserted into the implement such that a user may bend the implement (or razor implement) at various angles in order to adjust the razor blade to different angles to fit the contours of one's face or beard or facial hair in various spots, particularly the hard-to-reach areas of a man's face under the nose, around the lips and the chin area or around the shape of one's facial hair to shape one's beard or mustache at precise angles/curves, etc. Further, the implement may be made of bendable material that holds its shape when bent. Further, the razor blade may be mated/molded/inserted inside the implement. The inserted razor blade is a thin commercially available disposable razor blade, which is thin enough to bend with and inside the implement, which will hold the razor blade in place as it is. Further, the razor blade may be reinforced by light glue or fixed by screws, or pressed by force to secure it in place. Further, the implement may be made of pliable and malleable material. Further, the razor blade may be bent inside the implement as the implement curvature and shape are manipulated and changed. Once the curve of the implement is bent in a new angle or curvature, the implement maintains that shape as does the razor blade inside it. Further, the razor blade may be bent to match or mirror these difficult angles of one's face that may conform to irregular parts of one's face so that the blade can reach those parts cleanly. It could also be bent to help shape and sculpt one's beard and other facial hair. Even just as a secondary razor blade, men may complete a shave without missing the hard-to-reach areas and also use the blade to better shape facial hair at tight angles. Further, the razor blade may allow men to hit these hard-to-reach areas more efficiently and thereby cut down on the number of strokes required to shave across one's face at awkward angles in the hard-to-reach areas. Face irritation and razor burn correlate with the number of strokes required for clean shaving an area of the face. 
Further, the implement may significantly cut down on the number of passes these sensitive and hard-to-reach areas require for a close shave and well-sculpted facial hair. Further, the malleable razor or bendable razor may be adjusted to the different angles of one's face in different parts of the face. Further, the razor implement may be fitted and bent by a shaver in real-time to fit different hard-to-reach spaces on one's face or to fit the curves of one's facial hair. Whereas conventional razors are straight, the disclosed malleable razor allows the shaver to curve the blade around the area under and around the nose, lips, and chin, and also around the mustache, goatee, mutton chops, etc. The result is a closer shave with less irritation and razor burn in sensitive areas of a man's face that are otherwise hard to shave closely or with precise angles. Further, the razor blade may be bent by the user to conform to the user's face. Further, the razor blade may be mated/molded/inserted inside the implement. Further, the razor blade may be reinforced by light glue or fixed with screws or pressed to ensure it stays in place. Further, the implement may be made of pliable and malleable material. Like a long and inexpensive spoon at its neck that can bend, the implement may be bent back and forth (concave to convex). Further, the razor blade inside the implement may be thin and therefore weak enough to bend with the razor device. Further, in some embodiments, the implement may be made up of malleable materials such as copper, silver, lead, aluminum, steel, tin, plastic, etc. Further, the implement (shaving implement) may be configured for allowing insertably removal of replaceable blades in a cutting portion of the implement. The cutting portion may include a strip (plate) made of a solid material such as aluminum that can bend, and then there would be a strip (plate) made up of a more flexible material such as a silicone and attached to the strip made of metal. This implement may be able to hold the blade securely while also allowing the cutting portion to be curved. The end portion may interface with a handle and could even have a removable section that holds in the flexible cutting portion for allowing replacing of the cutting portion. FIG.1is a perspective view of a shaving implement100, in accordance with some embodiments. Further, the shaving implement100may be in at least one bent configuration. Further, the shaving implement100may include a cutting portion102and an end portion104. Further, the cutting portion102may include a first end106and a second end108opposite to the first end106. Further, the cutting portion102may be configured to be transitioned between a straight configuration and the at least one bent configuration based on an application of at least one external force on the cutting portion102. Further, the at least external force may include a manual force applied by a hand of a user on the cutting portion102. Further, the cutting portion102transitions between the straight configuration and the at least one bent configuration based on the application of the at least one external force greater than a threshold force. Further, the cutting portion102does not transition between the straight configuration and the at least one bent configuration based on the application of the at least one external force less than the threshold force. Further, the threshold force may be based on at least one of an elasticity and a plasticity of a material of the cutting portion102. 
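The threshold-force behavior described in the preceding paragraph can be illustrated with a minimal sketch. The class name, the numeric force values, and the use of a single bend angle to represent the retained configuration are assumptions made for the example only; they are not part of the disclosed implement.

```python
# Illustrative sketch only: hypothetical values, not the disclosed implement.
from dataclasses import dataclass


@dataclass
class CuttingPortion:
    threshold_force: float        # depends on the elasticity/plasticity of the material
    bend_angle_deg: float = 0.0   # 0.0 represents the straight configuration

    def apply_force(self, force: float, target_angle_deg: float) -> float:
        """Apply an external force aimed at bending the portion to target_angle_deg."""
        if force > self.threshold_force:
            self.bend_angle_deg = target_angle_deg   # plastically set and retained
        return self.bend_angle_deg                   # unchanged if the force is too small


if __name__ == "__main__":
    portion = CuttingPortion(threshold_force=10.0)
    print(portion.apply_force(4.0, 30.0))    # below the threshold: stays straight (0.0)
    print(portion.apply_force(12.0, 30.0))   # above the threshold: the bend is formed (30.0)
    print(portion.apply_force(0.0, 0.0))     # force removed: the bend is retained (30.0)
```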
Further, transitioning the cutting portion102from the straight configuration to the at least one bent configuration may include forming at least one bend110in the cutting portion102along a length of the cutting portion102defined between the first end106and the second end108. Further, the length may be a longitudinal span of the cutting portion102from the first end106to the second end108. Further, the forming of the at least one bend110may include curving the cutting portion102. Further, the forming of the at least one bend110shapes the cutting portion102in at least one shape. Further, the at least one shape may include a C-shape, a U-shape, a W-shape, an S-shape, etc. Further, the at least one bend110may be characterized by at least one of a curvature and a direction of the at least one bend110. Further, the direction may be a position of the at least one bend110in relation to a plane parallel to the cutting portion102in the straight configuration. Further, the direction may include an outward direction, an inward direction, etc. Further, transitioning the cutting portion102from the at least one bent configuration to the straight configuration may include removing the at least one bend110from the cutting portion102. Further, the removing of the at least one bend110from the cutting portion102may include straightening the cutting portion102. Further, the cutting portion102may be not transitionable between the straight configuration and the at least one bent configuration without the application of the at least one external force. Further, not transitioning the cutting portion102may include retaining the at least one shape of the cutting portion102after a removal of the at least one external force from the cutting portion102. Further, the cutting portion102may include at least one blade slot112configured for receiving at least one blade302, as shown inFIG.3, in the at least one blade slot112. Further, the at least one blade slot112may be configured for replaceably receiving the at least one blade302. Further, the cutting portion102may be configured for securing the at least one blade302in the at least one blade slot112. Further, a cutting edge304of each of the at least one blade302protrudes from a first side end306of the cutting portion102. Further, the first side end306may be adjacent to the first end106and the second end108. Further, the cutting portion102curves the at least one blade302in at least one blade shape for conforming the at least one blade302to the at least one bend110based on the forming of the at least one bend110. Further, the end portion104may be configured for removably holding the cutting portion102. Further, the second end108may be attached to the end portion104for the removably holding of the cutting portion102. Further, the cutting portion102may be removably inserted in the end portion104for the holding of the cutting portion in the end portion104. Further, in an embodiment, the at least one blade302may be comprised of at least one elastically deformable material. Further, the at least one elastically deformable material may include steel, copper, aluminum, ceramic, iron-based alloys, etc. Further, in an embodiment, the at least one blade302may be comprised of at least one plastically deformable material. Further, the at least one plastically deformable material may include nitinol (an alloy of 49%-51% of nickel and titanium), cermet, polyaramid fiber (Kevlar), glass fiber, carbon fiber, etc. 
Further, the at least one blade302curved in the at least one blade shape reaches every part of a face of the user. Further, the at least one blade302curved in the at least one blade shape sculpts a natural curve corresponding to the at least one blade shape around a mustache or a goatee beard of the user. Further, the at least one blade302curved in the at least one blade shape shaves hair of a face of the user, a head of the user, etc., in at least one shape corresponding to the at least one blade shape. Further, the at least one blade shape of the at least one blade302may be manipulated by the user based on the application of the at least one external force on the cutting portion102to transition the cutting portion102to one of the at least one bent configuration which includes the forming of the at least one bend110in the cutting portion102. Further, in some embodiments, the cutting portion102may be comprised of at least one plastically deformable material. Further, the at least one plastically deformable material allows transitioning of the cutting portion102between the straight configuration and the at least one bent configuration based on the application of the at least one external force. Further, the at least one plastically deformable material does not allow the transitioning of the cutting portion102between the straight configuration and the at least one bent configuration without the application of the at least one external force. Further, the at least one plastically deformable material may include nitinol (an alloy of 49%-51% of nickel and titanium), cermet, polyaramid fiber (Kevlar), glass fiber, carbon fiber, etc. Further, in some embodiments, the cutting portion102may be comprised of at least one material. Further, the at least one material may be pliable and malleable. Further, the at least one material allows transitioning of the cutting portion102between the straight configuration and the at least one bent configuration based on the application of the at least one external force. Further, in some embodiments, the cutting portion102may be configured for removably securing the at least one blade302in the at least one blade slot112. Further, in some embodiments, the cutting portion102may be configured for removably securing the at least one blade302in the at least one blade slot112using at least one securing element202-204, as shown inFIG.2. Further, the at least one securing element202-204may include screws, bolts and nuts, permanent magnets, etc. Further, in some embodiments, the cutting portion102may be configured for receiving the at least one blade302in at least one of a plurality of locations of the at least one blade slot112along a length of the at least one blade slot112. Further, in an embodiment, the cutting portion102may be configured for movably receiving the at least one blade302in the at least one blade slot112. Further, the at least one blade302moves between the plurality of locations along the length of the at least one blade slot112with the cutting portion102in the straight configuration based on the movably receiving. Further, in some embodiments, the at least one blade slot112may include a plurality of blade slots spacedly disposed on the cutting portion102. Further, the at least one blade302may include a plurality of blades. Further, the plurality of blade slots receives the plurality of blades. Further, in some embodiments, the at least one bend110may include a plurality of bends. 
Further, the forming of the at least one bend110in the cutting portion102along the cutting portion102may include forming the plurality of bends in the cutting portion102along the cutting portion102. Further, in some embodiments, the forming of the at least one bend110in the cutting portion102along the cutting portion102may include forming the at least one bend110in at least one location of the cutting portion102along the cutting portion102. Further, in an embodiment, the cutting portion102may include at least one marking402, as shown inFIG.4, on a surface404of the cutting portion102. Further, the at least one marking402marks the at least one location. Further, the at least one marking402indicates a plurality of segments of the cutting portion102along the length. Further, in some embodiments, the at least one blade302may be characterized by a blade length. Further, the length of the at least one blade302may be less than a length of the at least one blade slot112. Further, in some embodiments, the at least one blade302may be characterized by a blade length. Further, the length of the at least one blade302may be greater than a length of the at least one blade slot112. Further, in some embodiments, the at least one blade302may be characterized by a blade length. Further, the length of the at least one blade302may be equal to a length of the at least one blade slot112. In further embodiments, the shaving implement100may include a handle502between a first handle end504and a second handle end506, as shown inFIG.5. Further, the end portion104may be configured to be coupled with the handle502at the second handle end506for attaching the handle502to the cutting portion102. Further, the user grips the handle502with a hand of the user for using the shaving implement100. Further, the handle502may be pivotally coupled with the end portion104. Further, in some embodiments, the cutting portion102may include an elongated first plate602and an elongated second plate604, as shown inFIG.6. Further, the elongated first plate602may include at least one groove606on a first surface608of the elongated first plate602. Further, the elongated second plate604may be attached to the first surface608of the elongated first plate602over the at least one groove606for forming the at least one blade slot112. Further, the at least one groove606may be blade shaped. Further, in an embodiment, the elongated first plate602may include a plastically deformable material and the elongated second plate604may include a flexible material. Further, the plastically deformable material may include metals such as aluminum, copper, silver, etc. Further, the flexible material may include silicone, elastomer, etc. Further, in an embodiment, the elongated second plate604may be comprised of a flexible material. Further, the flexible material may include polytetrafluoroethylene, acrylic or polymethyl methacrylate (PMMA), etc. Further, in an embodiment, the elongated second plate604interfaces with the at least one blade302present in the at least one blade slot112. Further, the elongated second plate604frictionally resists at least one movement of the at least one blade302in relation to the at least one blade slot112based on the interfacing of the elongated second plate604with the at least one blade302for the securing of the at least one blade302in the at least one blade slot112. FIG.2is a perspective view of the shaving implement100, in accordance with some embodiments. 
FIG.3is a perspective view of the shaving implement100with the at least one blade302, in accordance with some embodiments. FIG.4is a perspective view of the shaving implement100, in accordance with some embodiments. FIG.5is a perspective view of the shaving implement100with the handle502, in accordance with some embodiments. FIG.6is a perspective view of the shaving implement100, in accordance with some embodiments. FIG.7is a perspective partial view of the shaving implement100, in accordance with some embodiments. Further, the shaving implement100may be in the straight configuration. FIG.8is a top view of the shaving implement100, in accordance with some embodiments. FIG.9is a perspective view of a shaving implement900, in accordance with some embodiments. Accordingly, the shaving implement900may include a cutting portion902, an end portion904, and a handle910. Further, the cutting portion902may include a first end906and a second end908opposite to the first end906. Further, the cutting portion902may be configured to be transitioned between a straight configuration and at least one bent configuration based on an application of at least one external force on the cutting portion902. Further, transitioning the cutting portion902from the straight configuration to the at least one bent configuration may include forming at least one bend in the cutting portion902along a length of the cutting portion902defined between the first end906and the second end908. Further, the at least one bend may be characterized by at least one of a curvature and a direction of the at least one bend. Further, transitioning the cutting portion902from the at least one bent configuration to the straight configuration may include removing the at least one bend from the cutting portion902. Further, the cutting portion902may be not transitionable between the straight configuration and the at least one bent configuration without the application of the at least one external force. Further, the cutting portion902may include at least one blade slot912configured for receiving at least one blade1002, as shown inFIG.10, in the at least one blade slot912. Further, the cutting portion902may be configured for securing the at least one blade1002in the at least one blade slot912. Further, a cutting edge1004of each of the at least one blade1002protrudes from a first side end1006of the cutting portion902. Further, the first side end1006may be adjacent to the first end906and the second end908. Further, the cutting portion902curves the at least one blade1002in at least one blade shape for conforming the at least one blade1002to the at least one bend based on the forming of the at least one bend. Further, the end portion904may be configured for removably holding the cutting portion902. Further, the second end908may be attached to the end portion904for the removably holding of the cutting portion902. Further, the handle910may be extending between a first handle end914and a second handle end916. Further, the end portion904may be configured to be coupled with the handle910at the second handle end916for attaching the handle910to the cutting portion902. Further, in some embodiments, the cutting portion902may be comprised of at least one plastically deformable material. Further, the at least one plastically deformable material allows transitioning of the cutting portion902between the straight configuration and the at least one bent configuration based on the application of the at least one external force. 
Further, the at least one plastically deformable material does not allow the transitioning of the cutting portion902between the straight configuration and the at least one bent configuration without the application of the at least one external force. Further, in some embodiments, the cutting portion902may be configured for removably securing the at least one blade1002in the at least one blade slot912. FIG.10is a perspective view of the shaving implement900with the at least one blade1002, in accordance with some embodiments. Although the present disclosure has been explained in relation to its preferred embodiment, it is to be understood that many other possible modifications and variations can be made without departing from the spirit and scope of the disclosure. | 25,189 |
11858158 | DETAILED DESCRIPTION The present disclosure seeks to provide a razor blade coating with a hard coating layer as a thin coating layer in which chromium boride, which is a nanocrystalline structure having high hardness, is dispersed in an amorphous mixture of chromium and boron, thereby improving the hardness and strength, i.e., the durability of the thin coating layer. Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings. In the following description, like reference numerals would rather designate like elements, although the elements are shown in different drawings. Further, in the following description of the at least one embodiment, a detailed description of known functions and configurations incorporated herein will be omitted for the purpose of clarity and for brevity. Additionally, various terms such as first, second, A, B, (a), (b), etc., are used solely for the purpose of differentiating one component from the other but not to imply or suggest the substances, the order or sequence of the components. Throughout this specification, when a part “includes” or “comprises” a component, the part is meant to further include other components, not excluding thereof unless there is a particular description contrary thereto. In addition, the terms such as “unit”, “module”, and the like refer to units for processing at least one function or operation, which may be implemented by hardware, software, or a combination thereof. Further, the description that the composition ratio of A to B is large or small means that the value of A/B is large or small. According to at least one embodiment of the present disclosure, physical vapor deposition (PVD) is used for coating a hard coating layer120. The physical vapor deposition may be any one of methods including a Direct Current (DC) sputtering, DC magnetron sputtering, DC unbalanced magnetron sputtering, pulse DC unbalanced magnetron sputtering, radio frequency (RF) sputtering, arc ion plating, electron-beam evaporation, ion-beam deposition, or ion-beam assisted deposition. FIG.1is a partial cross-sectional view of a blade edge for a razor and coating layers on the blade edge, according to at least one embodiment of the present disclosure. As shown inFIG.1, a razor blade10according to at least one embodiment of the present disclosure includes a razor blade substrate110, a hard thin film layer or hard coating layer120, and a polymer coating130. In at least one embodiment, the hard coating layer120is a single layer containing chromium (Cr) and boron (B) on the blade substrate110. Here, the term ‘single-layer’ means that the distinction between regions within the single layer is indefinite. Further, the single-layer may also encompass a layer configured to have different composition ratios depending on the position in the thickness direction thereof. The single-layer may be superior in durability compared to the multi-layer thin film. An initial fracture generally starting at the interlayer boundary under repeated impact loads is the main cause of reduced durability, and thus, the single-layer may outlast the multi-layer thin film. In particular, the hard coating layer120, according to at least one embodiment, is formed such that nanocrystallines122of high-hardness chromium borides is dispersed in amorphous124. FIG.2is a conceptual diagram of a first type of single composite target according to at least one embodiment of the present disclosure. 
As shown inFIG.2, a sputtering target used for physical vapor deposition is configured to have combined multiple regions composed of dissimilar materials. A single composite target20is a combination of dissimilar materials including at least one first material210and at least one second material220alternately arranged in a mosaic form, to be used as a single target. The deposition ratio of the first material210to the second material220on the substrate110may be controlled by adjusting the area ratio in the single composite target20between the first material210and the second material220. According to at least one embodiment of the present disclosure, the first material210used in the single composite target20of the first type is chrome (Cr) and the second material220used therein is boron (B). FIG.3is a conceptual diagram of a configuration of a vacuum chamber for depositing a hard coating layer according to at least one embodiment of the present disclosure. As shown inFIG.3, a sputtering apparatus30includes an aggregate310and a vacuum chamber320enclosing the aggregate310of multiple arranged elements of sputtering targets that are multiples of the single target20and razor blade substrates110to be coated. The sputtering apparatus30is internally formed with a high vacuum of about 10−6torr, an atmosphere by an injection gas (in at least one embodiment, argon or Ar gas), and a plasma350. With argon gas injected and direct current (DC) power applied, argon gas is plasma-excited, and argon ions are generated. The generated argon ions are accelerated toward the single composite target20by a DC power condition at the negative electrode as applied to the target side, until they collide with the target surface, causing neutral target atoms to be drawn out. The razor blade substrates110are formed by using a material such as stainless steel, undergo a heat treatment process to increase the hardness, and are polished to form a razor blade edge, and then simultaneously deposited with particles of dissimilar materials discharged from the single composite target20as shown inFIG.2to form the hard coating layer120. The razor blade substrate110may be subjected to a surface cleaning treatment with argon plasma before the deposition to remove residual foreign matter and oxide films. In addition, before performing a series of deposition operations on the blade aggregate310and before it is transported to face the single composite target20, the blade aggregate310may undergo pre-sputtering in the argon atmosphere for about 5 to 20 seconds for cleaning the single composite target20. Of the blade aggregate310, the blade areas to be coated and the sputtering target may be disposed to face each other. The instant embodiment illustrates a case where the blade aggregate310is transferred with respect to a fixed sputtering target, although the reverse is also envisioned. The razor blade aggregate310and/or the single composite target20may include a bias voltage forming mechanism (not shown in drawings) and/or a heating mechanism (not shown in drawings) required for sputtering. According to at least one embodiment of the present disclosure, the single composite target20includes Cr and B and deposition is performed with an atomic ratio of Cr to B, ranging from 9:1 to 4:6. Preferably, the atomic ratio of Cr to B may be 6:4. In this case, the power density for deposition may be in the range of 1 to 12 W/cm2and may correspond to a level of 1 to 10 kW. 
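The statement that the deposited Cr-to-B ratio can be controlled through the area ratio of the composite target can be illustrated with a rough first-order estimate: if the ion flux is uniform over the target face, the atomic flux of each material scales with its exposed area multiplied by its sputter yield. The sketch below is only an illustration of that proportionality; the yield values are placeholders chosen here, not values given in this disclosure, and the model ignores angular distribution, redeposition, and target erosion.

```python
def cr_area_over_b_area(cr_to_b_atomic_ratio, yield_cr, yield_b):
    # First-order estimate: with a uniform ion flux over the composite target,
    # atomic flux_i is proportional to area_i * yield_i,
    # so A_Cr / A_B = (desired Cr:B atomic ratio) * (Y_B / Y_Cr).
    return cr_to_b_atomic_ratio * (yield_b / yield_cr)

# Placeholder sputter yields (atoms per incident Ar ion); actual values depend
# on ion energy, incidence angle and target condition.
assumed_yield_cr = 1.2
assumed_yield_b = 0.5
print(cr_area_over_b_area(6 / 4, assumed_yield_cr, assumed_yield_b))  # -> 0.625
```

Under these assumed yields, reaching a 6:4 Cr:B atomic ratio would require roughly 0.6 units of Cr area per unit of B area, i.e. a larger B fraction of the target face because of its lower assumed yield.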
The blade substrate110may be subject to a bias of −50 to −750 V, a temperature of 0 to 200° C., and a DC power density of 1 to 12 W/cm2. Preferably, the blade substrate110may be subject to a temperature of 15 to 75° C., a bias of −200 to −600 V, and a DC power density of 4 to 8 W/cm2. This is a sputtering condition derived by considering the characteristic sputtering ratios of Cr and B and these are formed as a single composite target20. For reference, when Cr is incident on the substrate110with collisional energy of 250 to 10,000 eV and when B is incident on the same with collisional energy of 1,000 to 10,000 eV, the sputtering rate is high, based on which the single composite target20may be set to be within a range where they obtain collisional energy of 1,000 to 10,000 eV. When the ion energy of the particles incident on the blade substrate110is at a certain level, for example, 1,000 eV or less for B and 250 eV or less for Cr, or less, which corresponds to a knock-on condition, the particles may eventually bounce, and deposition may not be done well. On the contrary, collisional energy of 100,000 eV or more will not land the particles for deposition on the surface, which, instead, thrust deep into the substrate110. The described sputtering conditions are selected in consideration of the sputtering apparatus of at least one embodiment so that the particles are accelerated with the ion energy in the medium range of both extremes, for allowing cascade sputtering to occur mostly and thus the ion beam mixing effect which improves the bonding force between the surface of the blade substrate110and the coating materials toward the desirable coating process. In the above-described conditions, the hard coating layer120is distinctively formed to have a thickness of at least 10 nm and to be up to 1,000 nm thick. In addition, the hard coating layer120features the nanocrystallines122having a particle diameter of 3 to 100 nm as being dispersed in the amorphous124. In at least one embodiment, the nanocrystalline122may include various types of crystal structures in which Cr and B are crystallographically combined, such as CrB, CrB2, Cr2B, and may also include Cr crystals, while the amorphous124may be a mixture of Cr and B. In addition, the size of crystals formed in the hard coating layer120may be appropriately controlled by appropriately adjusting the collision energy of the particles that collide with the blade substrate110. The amorphous124structure, according to at least one embodiment, is arranged to surround the nanocrystalline122structures and thereby serves to disperse and absorb stress applied to the high-hardness nanocrystalline122structure in which Cr and B are crystallographically combined. In other words, the nanocrystalline122structures according to at least one embodiment may contribute to securing the hardness of the hard coating layer120, and the amorphous124structure including Cr and B may surround and support the nanocrystalline122structures to disperse an impact load, thereby securing the strength and durability of the hard coating layer120. In addition, Cr in the amorphous124structure may contribute to securing the adhesion between the hard coating layer120and the substrate stainless steel. On the other hand, B has a weak affinity with Fe, the main component of the blade substrate110, and it has a higher affinity with Cr than with Fe. In physical vapor deposition, B may be crystallographically bound to Cr or dispersed within the amorphous124. 
In general, when the size of the formed crystal is large, the surface hardness may be further increased, but the brittleness may increase, and durability may be deteriorated due to damage from an external impact. The sputtering conditions are preferably controlled such that crystals of appropriately small sizes, which are on the order of several to tens of nanometers in diameter, are evenly distributed. For example, when the energy of the particles incident on the deposition surface is large, it may exhibit the effect of splitting the crystal nuclei of the deposition surface or splitting the grown crystal, thereby suppressing the increase in the size of the nanocrystalline122structures in the hard coating layer120. Meanwhile, an ion gun may be additionally installed on the sputtering apparatus according to one embodiment, and a thin-film deposition process may be performed using the sputtering apparatus and the arc ion plating method together. FIG.4is conceptual diagrams of a second type of single composite target and a third type of single composite target according to some embodiments of the present disclosure. As shown inFIG.4, the second and third types of the single composite targets21and22are each formed by three types of target materials combined. A first material210is Cr, a second material220is B, and a third material230is one in which materials of Cr and B are crystallographically combined in a certain arrangement. The first, second, and third materials210,220, and230arranged in the orders shown inFIG.4at (a) and (b) are merely illustrative but not restrictive examples, and they may be arranged in different orders or at different area ratios. The third material230may be a composite of Cr and B that are crystallographically combined in the form of CrxBysuch as CrB, CrB2, Cr2B, and CrB4among others, and Cr and B may be combined at various atomic ratios. When using a partial target composed of a material in which Cr and B are crystallographically combined, it is highly probable that the coating layer formed therefrom contain mainly the same partial target's crystal structures distributed therein, where a specific one of the crystal structures distributed in the coating layer may be induced to become the dominant ingredient therein. In at least one embodiment, the metallic material of the dissimilar materials is described as being Cr, but the present disclosure is not so limited, and envisions the metallic material as being any one of Cr, Ni, Ti, W, and Nb. In at least one embodiment, Cr is selected to be the metallic material in consideration of the thin-film adhesiveness with the stainless steel of the blade substrate110. Although not shown, a single composite target may be configured such that the second material220and the third material230are inserted into the first material210, wherein the area ratio between the dissimilar materials may be adjusted by adjusting the interval in the pattern at which the second and third materials are inserted or by adjusting the sizes of the pattern elements. The single composite targets20,21, and22may be configured in any manner in terms of form and arrangement as long as the targets20,21, and22can contain properly distributed dissimilar materials until they are granulated and drawn out therefrom to be sufficiently uniformly mixed for the blade substrate110subject to the deposition. The respective materials disposed inside the single composite targets may take various shapes such as circles, triangles, and squares, for example. 
Further, the rectangular shapes may be arranged in a mosaic pattern to be mechanically combined. Alternatively or additionally, a single material may form the entire single composite target with a plurality of holes formed therein for insertion and bonding of dissimilar materials. FIG.5is a transmission electron microscopy (TEM) photograph of a hard coating layer coated according to at least one embodiment of the present disclosure. FIG.6shows results of selected area electron diffraction (SAED) of nanocrystallines of a hard coating layer coated according to at least one embodiment of the present disclosure. As shown inFIG.6, deposition of the nanocrystallines122having cmcm-CrB structure and I4/mcm-CrB structure was confirmed, and according to Kvashnin et al. (Kvashnin, A. G., Oganov, A. R., Samtsevich, A. I. & Allahyari, Z. (2017). Computational search for novel hard chromium-based materials,Journal of Physical Chemistry,8(4), 755-764) and in theory at least, all of cmcm-Cr, I41/amd and I4/mcm exhibit very high hardness in terms of hardness of crystalline particle, and the CrB crystal as produced and measured by the embodiments of the present disclosure may be interpreted as achieving a sufficiently high hardness. On the other hand, although not shown in drawings, the hard coating layer120according to some embodiments of the present disclosure may have a configuration in which the average particle diameter of the nanocrystalline122or the ratio of the nanocrystalline122to the amorphous124is variable in the thickness direction in the hard coating layer120. For example, the hard coating layer120may be configured to define a low ratio of the nanocrystallines122to the amorphous124, that is, a high ratio of the amorphous124, close to the inner side of the hard coating layer120in contact with the blade substrate110, and to define a high ratio of the nanocrystallines122to the amorphous124, that is, a high ratio of the nanocrystallines122, close to the outer side of the hard coating layer120. The configuration with the composition ratios being variable in the thickness direction in the hard coating layer120may be implemented by providing variations in the area ratio of the dissimilar materials in at least one of the single composite targets20,21, and22in a direction in which the blade substrate110is transferred during the physical vapor deposition. Further, in the sequential and continuous deposition process performed on the razor blade substrate110with at least one of the single composite targets20,21, and22, variations in the area ratio of the dissimilar materials in the single composite targets20,21,22cause particles to be drawn out therefrom at various composition ratios such that different composition ratios of the particles are deposited on the razor blade110in the early and late deposition stages. In addition, although the hard coating layer120according to at least one embodiment features a single layer deposited in the form of the nanocrystalline122of Cr and B crystallographically combined and the amorphous material124into which Cr and B are mixed, the present disclosure does not exclude that a buffer layer or adhesion enhancing layer is incorporated between the hard coating layer120and the blade substrate110or that the Cr coating layer may be laminated as an interlayer between the hard coating layer120and the polymer coating130. 
The hard coating layer120according to at least one embodiment is a single layer that has expectable improvements in strength and durability, and it may be formed to have dissimilar materials that are gradually changed in their composition ratio in the thickness direction, and in particular, formed to have such advantageous composition ratio in the regions close to both side surfaces of the razor blade10such that adhesion with the material or the coating layer that comes into contact with both side surfaces is enhanced. The present disclosure provides an improvement to the razor blade coating by a physical vapor deposition method, by forming a hard coating layer as a thin coating layer in which chromium boride, which is nanocrystallines having high hardness, is dispersed in an amorphous mixture of chromium and boron, and thereby improving the strength and hardness of the thin coating layer and securing the bonding force by chromium in the amorphous mixture between the hard coating layer and the blade substrate. Although exemplary embodiments of the present disclosure have been described for illustrative purposes, those skilled in the art will appreciate that various modifications, additions, and substitutions are possible, without departing from the various characteristics of the disclosure. Therefore, exemplary embodiments of the present disclosure have been described for the sake of brevity and clarity. Accordingly, one of ordinary skill would understand the scope of the disclosure is not limited by the above explicitly described embodiments but by the claims and equivalents thereof. | 19,061 |
11858159 | DETAILED DESCRIPTION OF EMBODIMENTS Technical solutions of this application will be described in detail below with reference to embodiments and accompanying drawings, where the same or similar reference signs indicate the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the accompanying drawings are exemplary, which are merely intended to explain this application and are not to limit the present disclosure. As used herein, the terms, such as “longitudinal”, “transverse”, “upper”, “lower”, “front”, “back”, “left”, “right”, “vertical”, “horizontal”, “top”, “bottom”, “inside”, and “outside”, are based on the orientation or positional relationships shown in the accompanying drawings, and are merely intended to facilitate and simplify the description of the present disclosure, rather than indicating or implying that the device or element referred to must have a particular orientation, or be constructed and operated in a particular orientation. Therefore, these terms should not be construed as limitations for this application. In addition, terms “first” and “second” may explicitly or implicitly indicate the presence of one or more of the features limited thereby. The terms “first” and “second” used herein are intended to distinguish these features without implying any order or priority. In the description of the present disclosure, unless otherwise stated, the term “plurality” means two or more. In the description of the present disclosure, unless otherwise specified, the terms “mounting”, “connection”, “joint” should be understood in a broad sense. For example, the term “connection” can be a fixed connection, removable connection, or integral connection; a mechanical connection or electrical connection; a direct connection or indirect connection through an intermediate medium, or an internal connection of two components. For one of ordinary skill in the art, the specific meaning of the above terms in the description can be understood in specific cases. A cutting stock method for a rectangular defective sheet will be described in this embodiment referring toFIGS.1-5. Provided is a cutting stock method for a rectangular defective sheet, which includes the following steps. (S1) Information of the rectangular defective sheet is acquired, where the information includes size information of the rectangular defective sheet, size information of a target blocks, and location information of a defect. (S2) A cutting position discrete set of the rectangular defective sheet is acquired. The rectangular defective sheet is cut once according to the cutting position discrete set to produce two first sub-sheets. If a first sub-sheet is non-defective, step (S3) is performed, otherwise, step (S4) is performed. (S3) The non-defective first sub-sheet is cut into a plurality of target blocks according to a cutting position discrete set of the non-defective first sub-sheet. (S4) The first defective first sub-sheet is cut into two second sub-sheets according to a cutting position discrete set of the defective first sub-sheets. If a second sub-sheet is non-defective, step (S3) is performed; and if the second sub-sheet is defective, then step (S4) is repeated. (S5) A sum of sizes of the plurality of target blocks is calculated. A cutting plan corresponding to the largest sum of sizes is selected as an optimal cutting solution. 
(S6) The rectangular defective sheet is cut according to the cutting position discrete set corresponding to the optimal cutting solution. Specifically, this application provides a cutting stock method aiming at layout problems of the rectangular defective sheet under constraints of guillotine. For a defective sheet or a non-defective sheet, two cutting position search algorithms are respectively adopted herein. Based on the size information of the target block and the location information of the defects, a cutting position discrete set is calculated without losing the optimal solution. The rectangular sheet is cut by using this discrete set, which can significantly reduce the search space of the layout process and thus enhance the layout efficiency in production. Specifically, in the layout method provided herein, the size information of the rectangular defective sheet and the target block, and the coordinate information of the defects are acquired. Then the cutting position discrete set in the rectangular defective sheet is calculated through the cutting position search algorithm. The rectangular defective sheet is cut according to the points in the discrete set. Each cutting will produce two sub-sheets, including a non-defective sheet and a defective sheet. The type of the sub-sheets is determined after each cutting. If the sub-sheet is non-defective, then the non-defective sub-sheet is cut into the target block as required, and the size of the target block is also calculated. If the sub-sheet is defective, the iterative search continues according to the cutting position discrete set of the defective sub-sheet for the next cutting, so as to gradually divide the defective sub-sheet into multiple non-defective sub-sheets and defective sub-sheets until the area of the defective sub-sheet is approximately equal to the defective region. The sizes of all the obtained target blocks are added up to obtain the sum of the sizes. All possible cutting positions are traversed, and the sum of the sizes of target blocks obtained by each cutting solution is calculated and stored. Different cutting solutions are traversed, the above steps for cutting and calculating and storing the sizes are repeated. Then the maximum sizes obtained by those cutting solutions are compared to find the cutting solution corresponding to the largest sum of sizes as the optimal cutting solution, which is then used for practical production. The cutting stock method provided herein is applied to solve the layout problems of two-dimensional (2D) rectangular sheets under constraints of “guillotine cutting”. The “guillotine cutting” refers to that during cutting, a single cutting action must be made from one side of the rectangular sheet to the opposite side thereof to divide the rectangular sheet into two separate small rectangular blocks, where the set of cutting positioning points is called the discrete set. In this case, the rectangular sheet is cut several times to form a plurality of rectangular sub-sheets, which are neither scrap nor goods and are called sub-sheets. Those sub-sheets are divided into P-sheets (non-defective sheets) and C-sheets (defective sheets), according to whether they contain defects or not. The position where the guillotine cutting is made on the sub-sheets is called the cutting position. The target block is the non-defective sheet cut from the sub-sheets. The size of the target block is set according to the requirements of the product. 
In some embodiments, in step (S3), the cutting position discrete set of the non-defective sheet is calculated by a first cutting position search algorithm through the following steps. The size information of target blocks and the size information of the rectangular defective sheet are acquired. The target blocks are subjected to linear combination according to length and width thereof to produce combined blocks varying in size. A bottom left vertex of the rectangular defective sheet is set as an origin of a Cartesian coordinate system. A width combined point set within a dimensional boundary of the rectangular defective sheet is generated based on linear combinations of width of the target blocks with the origin as reference. A height combined point set within a dimensional boundary of the rectangular defective sheet is generated based on linear combinations of height of the target blocks with the origin as reference. For an x-axis discrete set of the rectangular defective sheet, each point r in the width combined point set is subtracted from a width w of the rectangular defective sheet. A maximum point not greater than w−r is found from the width combined point set, and that maximum point is then added to the x-axis discrete set. Similar operations are performed on a y-axis discrete set of the rectangular defective sheet. The x-axis discrete set of the rectangular defective sheet is denoted as Rv(w) and the y-axis discrete set of the rectangular defective sheet is denoted as Rh(h), respectively expressed by: Rv(w) = { ⟨w − z⟩v | z ∈ Nv(w) } and Rh(h) = { ⟨h − z⟩h | z ∈ Nh(h) }; where ⟨t⟩v = max{ z | z ∈ Nv(w), z ≤ t }; ⟨t⟩h = max{ z | z ∈ Nh(h), z ≤ t }; Nv(w) = { z | z = Σi=1..n αi·wis, 0 ≤ z ≤ w, αi ∈ {0, 1, 2, …}, i ∈ I }; Nh(h) = { z | z = Σi=1..n αi·his, 0 ≤ z ≤ h, αi ∈ {0, 1, 2, …}, i ∈ I }; w and h represent a width and height of the rectangular defective sheet, respectively; wis and his represent a width and height of a target block i, respectively; I represents a target block set; Nv(w) represents the width combined point set; Nh(h) represents the height combined point set; ⟨t⟩v represents a maximum point not greater than t in Nv(w); ⟨t⟩h represents a maximum point not greater than t in Nh(h); αi represents the number of target blocks i included in a combination, αi = 0, 1, 2, …; and z represents a point in the corresponding combined point set. In some embodiments, in step (S4), the cutting position discrete set of the defective first sub-sheet is calculated by a second cutting position searching algorithm through the following steps. For an x-axis discrete set of the defective first sub-sheet, each point r in the width combined point set is subtracted from a width w of the rectangular defective sheet and a left boundary of a defect j. A maximum point not larger than w−r and xjd−r is found from the width combined point set, and the maximum point is added to the x-axis discrete set. Similar operations are performed on a y-axis discrete set of the defective first sub-sheet considering a height h of the rectangular defective sheet and a lower boundary yjd of defect j.
The x-axis discrete set of the defective first sub-sheet is denoted as Rdv(w) and the y-axis discrete set of the defective first sub-sheet is denoted as Rdh(h), respectively expressed by: Rdv(w) = { ⟨w − z⟩v | z ∈ Ndv(w) } and Rdh(h) = { ⟨h − z⟩h | z ∈ Ndh(h) }; where Ndv(w) = Nv(w) ∪ { z | z = xjd + wjd + v, j ∈ D, v ∈ Nv(w), 0 ≤ z ≤ w }; Ndh(h) = Nh(h) ∪ { z | z = yjd + hjd + v, j ∈ D, v ∈ Nh(h), 0 ≤ z ≤ h }; Nv(w) = { z | z = Σi=1..n αi·wis, 0 ≤ z ≤ w, αi ∈ {0, 1, 2, …}, i ∈ I }; Nh(h) = { z | z = Σi=1..n αi·his, 0 ≤ z ≤ h, αi ∈ {0, 1, 2, …}, i ∈ I }; ⟨t⟩v = max{ z | z ∈ Nv(w), z ≤ t }; ⟨t⟩h = max{ z | z ∈ Nh(h), z ≤ t }; w and h represent a width and height of the rectangular defective sheet, respectively; wis and his represent the width and height of the target block i, respectively; I represents the target block set; xjd and yjd are coordinates of a lower left corner of the defect j; wjd and hjd are a width and height of the defect j, respectively; and D is a defect set; Nv(w) represents the width combined point set; Nh(h) represents the height combined point set; ⟨t⟩v represents a maximum point not greater than t in Nv(w); ⟨t⟩h represents a maximum point not greater than t in Nh(h); αi represents the number of target blocks i included in a combination, αi = 0, 1, 2, …; and z represents a point in the corresponding combined point set. Specifically, for the defective sheet, the influence of defects needs to be considered on the basis of the non-defective discrete points. When calculating the combined points, besides the discrete points generated with the bottom left vertex of the cut board as a reference, the x-axis discrete points generated with the right boundary of the defect as a reference and the y-axis discrete points generated with the upper boundary of the defect as a reference should be considered. In some embodiments, in step (S3), the non-defective first sub-sheet is cut through the following steps. (S31) A cutting position discrete point set within one-half a width of the non-defective first sub-sheet is searched. (S32) A maximum size of the plurality of target blocks that can be cut from the non-defective first sub-sheet is calculated based on all the linear combinations of width and height of the target blocks. (S33) The non-defective first sub-sheet is cut according to a cutting position discrete set corresponding to the maximum size. Specifically, since the non-defective sheet is rectangular, symmetrical, and free of defective regions, the size of the target block can be read directly. Moreover, the cutting position discrete set within one-half the length of the target block can be searched iteratively according to the size of the target block, so as to obtain the cutting plan corresponding to the maximum size of the target block. The non-defective sheet can be divided into a plurality of target blocks according to the cutting position discrete set, enhancing the searching efficiency. In some embodiments, the number of the defects on the rectangular defective sheet is D={1, 2, …, m}; coordinates of a lower left corner of each defect j∈D are (xjd, yjd); and a width and height of each defect are wjd and hjd, respectively; the rectangular defective sheet has a width of w and a height of h, respectively, and n types of target blocks with a width of wis, a height of his, and a size of vi are cut from the rectangular defective sheet under a constraint of “guillotine cutting”, where I={1, 2, …, n}; the target blocks do not intersect the defect, and are configured to have a fixed orientation and to be unable to rotate; and dimensions of the rectangular defective sheet, the target blocks and the defect are all integers.
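To make the two cutting position search algorithms concrete, the following minimal Python sketch computes the combined point sets and the reduced cutting-position sets, first for a non-defective sub-sheet and then with the defect-boundary offsets of the second algorithm. It assumes integer dimensions as stated above; the function names, the example values, and the choice to apply the reduction over the defect-aware set are illustrative simplifications by this editor, not taken from the patent.

```python
def combined_points(sizes, limit):
    # N(limit): points z = sum(alpha_i * sizes[i]) with non-negative integer
    # coefficients alpha_i and 0 <= z <= limit (integer dimensions assumed).
    reachable = [False] * (limit + 1)
    reachable[0] = True
    for s in sizes:
        for z in range(s, limit + 1):
            if reachable[z - s]:
                reachable[z] = True
    return [z for z, ok in enumerate(reachable) if ok]

def cutting_positions(sizes, limit):
    # R(limit): for every point r of N(limit), keep the largest point of
    # N(limit) that does not exceed limit - r (the reduced discrete set).
    pts = combined_points(sizes, limit)
    return sorted({max(z for z in pts if z <= limit - r) for r in pts})

def defect_aware_positions(sizes, limit, defect_starts, defect_extents):
    # Same reduction, but the point set is first extended with points offset
    # from the far boundary (start + extent) of each defect, as in Ndv/Ndh.
    base = combined_points(sizes, limit)
    pts = set(base)
    for start, extent in zip(defect_starts, defect_extents):
        pts.update(start + extent + v for v in base if start + extent + v <= limit)
    pts = sorted(pts)
    return sorted({max(z for z in pts if z <= limit - r) for r in pts})

# Illustrative example: two block types on a 200 x 120 sheet with one defect
# whose lower-left x coordinate is 70 and whose width is 15 (assumed values).
widths, heights = [30, 45], [20, 35]
r_v = cutting_positions(widths, 200)                    # x-axis set Rv(w)
r_h = cutting_positions(heights, 120)                   # y-axis set Rh(h)
r_dv = defect_aware_positions(widths, 200, [70], [15])  # defect-aware Rdv(w)
```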
Specifically, the defective regions and target block areas are pre-defined herein to render easy marking for different defective regions and target blocks in the cutting position search algorithm, which facilitates the search and calculation of the algorithm. In some embodiments, in step (S5), the sum of sizes of the plurality of target blocks is calculated through the following steps. The non-defective sub-sheet S=(w, h) is cut into rectangular blocks to obtain the maximum sum of sizes from n solutions, expressed by: g(w, h) = max{ vi·⌊w/wis⌋·⌊h/his⌋ | wis ≤ w, his ≤ h, i = 1, 2, …, n }; where wis and his represent a width and height of the target block i, respectively; vi represents a size of the target block i; n represents the number of types of target blocks; and w and h represent a width and height of a sub-sheet, respectively. In this embodiment, the information of the target block and the defective region is input into the algorithm, which is initialized with g0 = 0. The type of each sub-sheet is determined, and the cutting position discrete set is calculated for the defective and non-defective sheets, respectively. The cutting position discrete sets at different locations are traversed and compared. Specifically, the optimal sizes of the two sub-sheets g(ws1, hs1) and g(ws2, hs2) obtained after each cutting are calculated, and gi is also calculated, where gi = g(ws1, hs1) + g(ws2, hs2); and ws1, hs1, ws2, and hs2 are the width and height parameters of the two sub-sheets. The above steps are traversed until the target size gi is maximum to end the loop and output the maximum size and the corresponding cutting solution. The cutting solution is then applied to the actual cutting process. This application also provides a cutting stock system, which includes a memory, a processor, and a computer program stored on the memory. The computer program is configured to be executed by the processor. The processor is configured to execute the computer program to implement the cutting stock method. This application further provides a computer-readable storage medium. The computer program is stored on the computer-readable storage medium. The computer program is configured to be executed by a processor to implement the aforementioned cutting stock method. Other components and operations of the cutting stock method and system for the defective rectangular sheets provided herein are known to one of ordinary skill in the art and will not be described in detail here. The modules of the above-mentioned cutting stock system can be implemented as a whole or in part by means of software, hardware and combinations thereof. The modules may be embedded in hardware or in a processor independent of the electronic device, or stored in software in the memory of the electronic device so that the processor can call up and perform the operations corresponding to those modules. One of ordinary skill in the art can understand that the computer program can instruct the relevant hardware to implement all or part of the operations in the method of the above embodiments. The computer program may be stored in a non-volatile computer readable storage medium and may include the aforementioned operations when executed. In addition, any references to memory, storage, databases or other media used herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
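The traversal described above can be summarized as a small recursion: a sub-sheet that contains no defect is valued directly with g(w, h), while a defective sub-sheet is split at candidate guillotine positions and the best gi = g(ws1, hs1) + g(ws2, hs2) is kept. The sketch below reuses cutting_positions() from the earlier example, assumes a single rectangular defect and fixed-orientation blocks, and uses illustrative block data chosen here; a full implementation would use the defect-aware sets Rdv/Rdh when splitting defective sub-sheets and would also record the chosen cut positions so the cutting plan itself can be output.

```python
from functools import lru_cache

# Illustrative data only: (width, height, value) per block type, and a single
# defect given as (x, y, width, height) in sheet coordinates.
BLOCKS = [(30, 20, 600), (45, 35, 1575)]
DEFECT = (70, 40, 15, 10)

def g(w, h):
    # Maximum value when a non-defective sub-sheet is tiled with one block type.
    best = 0
    for bw, bh, v in BLOCKS:
        if bw <= w and bh <= h:
            best = max(best, v * (w // bw) * (h // bh))
    return best

def overlaps_defect(x, y, w, h):
    dx, dy, dw, dh = DEFECT
    return not (x + w <= dx or dx + dw <= x or y + h <= dy or dy + dh <= y)

@lru_cache(maxsize=None)
def best_value(x, y, w, h):
    # Non-defective sub-sheets are valued directly; defective sub-sheets are
    # split at candidate positions and the best gi = g1 + g2 is kept.
    if not overlaps_defect(x, y, w, h):
        return g(w, h)
    widths = [bw for bw, _, _ in BLOCKS]
    heights = [bh for _, bh, _ in BLOCKS]
    best = 0
    for c in cutting_positions(widths, w):       # vertical cuts along x
        if 0 < c < w:
            best = max(best, best_value(x, y, c, h) + best_value(x + c, y, w - c, h))
    for c in cutting_positions(heights, h):      # horizontal cuts along y
        if 0 < c < h:
            best = max(best, best_value(x, y, w, c) + best_value(x, y + c, w, h - c))
    return best

# best_value(0, 0, 200, 120) evaluates the whole example sheet from above.
```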
Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration rather than limitation, RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), memory bus (Rambus) direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM). It will be clear to those skilled in the art that, for convenient and brief description, the above-mentioned division of each functional unit and module is given as an example. In actual application, the above-mentioned functions can be assigned to be completed by different functional units and modules as needed, that is, the internal structure of the device is divided into different functional units or modules to complete all or part of the above-described functions. The above description of embodiments is merely intended to illustrate the technical routes and features of the present disclosure to enable those skilled in the art to understand and implement the technical solutions of the present disclosure. Nonetheless, the present disclosure is not limited to the embodiments described above. All variations or modifications made without departing from the spirit of the disclosure shall fall within the scope of the present disclosure defined by the appended claims. | 18,286 |
11858160 | DETAILED DESCRIPTION To facilitate a better understanding of the present disclosure, the following examples of certain embodiments are given. The following examples are not to be read to limit or define the scope of the disclosure. Embodiments of the present disclosure and its advantages are best understood by referring toFIGS.1through4, where like numbers are used to indicate like and corresponding parts. Described herein are various systems and methods for aligning a seal for cutting with an ultrasonic cutter tool. FIG.1illustrates an example cutting system100. The cutting system100may comprise an ultrasonic cutter tool105and a railing system110disposed on a platform115. While the example cutting system100, throughout this disclosure, may be disposed on the platform115, the cutting system100is not limited to being disposed on the platform115and may be disposed on any suitable surface. As illustrated, the ultrasonic tool105may be coupled to the railing system110and may be configured to translate along the length of the railing system110. The railing system110may be secured to the platform115through any suitable means (for example, using fasteners). The cutting system100may further comprise a power source120operable to provide power to the ultrasonic cutter tool105. The power source120may be electrically coupled to the ultrasonic cutter tool105. In embodiments, any suitable source of power may be used as the power source120. Without limitations, the power source120may be a generator, one or more batteries, and any combination thereof. The power source120may be disposed at any suitable location relative to the ultrasonic cutter tool105. While the example power source120may be illustrated as being disposed on the platform115, the power source120is not limited to such a location and may be disposed remote from the platform115. The cutting system100may further comprise an alignment mold125disposed on the platform in proximity to the railing system110. In embodiments, the alignment mold125may be disposed parallel to the railing system110and offset by a distance. This distance may be any suitable length. Without limitations, the alignment mold125may be disposed at a distance within a range of about 0.5 inches to about 10 inches. The alignment mold125may be operable to receive a seal130, wherein the seal130may be disposed partially on the alignment mold125. In embodiments, the ultrasonic cutter tool105may be operable to cut the seal130as the ultrasonic cutter tool105translates along the railing system110. As illustrated, the cutting system100may further comprise one or more roller bearing carriages135coupled to the railing system110. Each one of the one or more roller bearing carriages135may be at least partially disposed over the alignment mold125, and each one of the one or more roller bearing carriages135may be configured to translate along the alignment mold125as each of the one or more roller bearing carriages135translates along the railing system110. FIG.2illustrates an example ultrasonic cutter tool105of the cutting system100(referring toFIG.1) disposed on the platform115. The ultrasonic cutter tool105may be any suitable size, height, shape, and any combinations thereof. In embodiments, the ultrasonic cutter tool105may comprise any suitable materials, including, but not limited to, metals, nonmetals, polymers, ceramics, composites, and any combinations thereof. As illustrated, a mechanical adjuster200may be coupled to the ultrasonic cutter tool105. 
The mechanical adjuster200may be operable to translate the ultrasonic cutter tool105in relation to the railing system110for precise cut-path alignment and or offset. In embodiments, the translation of the ultrasonic cutter tool105is not limited by a certain distance. The mechanical adjuster200may be any suitable size, height, shape, and any combinations thereof. In embodiments, the mechanical adjuster200may comprise any suitable materials, including, but not limited to, metals, nonmetals, polymers, ceramics, composites, and any combinations thereof. The mechanical adjuster200may comprise one or more actuators205operable to translate the ultrasonic cutter tool105in relation to the railing system110once actuated. Each of the one or more actuators205may be operable to translate the ultrasonic cutter tool105along an axis. For example, one of the one or more actuators205may be actuated to translate the ultrasonic cutter tool105along an x-axis or z-axis in relation to the railing system110. In other embodiments, the one of the one or more actuators205may be actuated to translate the ultrasonic cutter tool105along a y-axis if the railing system110is configured to be perpendicular to such an axis. In embodiments, the one or more actuators205may be actuated to position a blade210of the ultrasonic cutter tool105above the alignment mold125. The blade210may be secured within the ultrasonic cutter tool105through any suitable means, including fasteners. The blade210may be any suitable size, height, shape, and any combinations thereof. For example, the blade210may comprise a triangular shape. The blade210may be operable to vibrate at a designated frequency based on the power provided by the power source120(referring toFIG.1). The ultrasonic cutter tool105may utilize the blade210, while vibrating, to cut the seal130(referring toFIG.1) along the alignment mold125. In one or more embodiments, the ultrasonic cutter tool105may further comprise a cover215disposed in proximity to the blade210operable to prevent physical access to the blade210by an external structure. For example, the cover215may be any suitable size and/or shape operable to prevent an operator from disposing an object in a pathway of the blade210. As illustrated, an alignment component220may be disposed at a first end225of the platform115. The alignment component220may be operable to align the blade210to be parallel to the apex of the alignment mold125(for example, apex325inFIG.3). Any suitable fasteners may be utilized to couple the alignment component220to the platform115. While the alignment component220may be illustrated as being disposed at the first end225, the alignment component220is not limited to such a location. The alignment component220may be any suitable size, height, shape, and any combinations thereof. For example, the alignment component220may comprise rectangular or square shape. In embodiments, the alignment component220may comprise any suitable materials, including, but not limited to, metals, nonmetals, polymers, ceramics, composites, and any combinations thereof. During operations, an operator may utilize the alignment component220to verify that the blade210is aligned to be parallel to the alignment mold125. At an initial positioning, a side of the alignment component220may be parallel to the alignment mold125. 
If the ultrasonic cutter tool105is positioned so as to abut the blade210against the side of the alignment component220, and the blade210is not flush with the side of the alignment component220, the operator may adjust the blade210within the ultrasonic cutter tool105to be aligned. If the blade210is flush against the side of the alignment component220, the operator may continue to cut the seal130(referring toFIG.1) disposed at least partially over the alignment mold125. FIG.3illustrates an example seal130disposed on the alignment mold125. The seal130may be any suitable size, height, shape, and any combinations thereof. As disclosed herein, the seal130may comprise at least one apex or bend, but the seal130is not limited to a singular apex or bend. Without limitations, the seal130may comprise a Z-shaped profile, wherein the seal130comprises a first leg300, a diagonal section305, and a second leg310. In embodiments, the diagonal section305may be disposed between the first leg300and the second leg310. The first leg300may be disposed parallel to the second leg310and vertically offset from the second leg310. In one or more embodiments, the length of the first leg300may be shorter than the length of the second leg310. A first bend315may be disposed between the first leg300and the diagonal section305, and a second bend320may be disposed between the diagonal section305and the second leg310. As disclosed, the first bend315may comprise an angle between the first leg300and the diagonal section305, and the second bend320may comprise an angle between the diagonal section305and the second leg310. Without limitations, the angle for either the first bend315or the second bend320may be any suitable angle. The first bend315may comprise an equivalent angle or a different angle from the second bend320. In embodiments, the seal130may comprise any suitable materials, including, but not limited to, polymers, ceramics, composites, and any combinations thereof. As illustrated, the first bend315may be positioned over or aligned with an apex325of the alignment mold125, wherein the apex325may be the top point of the alignment mold125. The alignment mold125may be configured to allow for the apex325to accommodate the angle of the first bend315of one or more seals130. FIG.4illustrates an example one or more roller bearing carriages135with the seal130and alignment mold125. Each one of the one or more roller bearing carriages135may be any suitable size, height, shape, and any combinations thereof. In embodiments, each of the one or more roller bearing carriages135may comprise a set of roller bearings400, wherein each set of roller bearings400is disposed over a portion of the seal130. In embodiments, a set of roller bearings400may consist of two individual roller bearings400. As illustrated, the first bend315may be disposed over the apex325of the alignment mold125. The set of roller bearings400may be disposed so as to position the apex325, and subsequently the first bend315, in between a gap402disposed between each set of roller bearings400. By disposing each set of roller bearings400over and on top of the seal130, thereby aligning the gap402with the apex325, the one or more roller bearing carriages135may align the first bend315with the apex325. In embodiments, the set of roller bearings400may be operable to depress or apply a downward force against the seal130. 
During operations, each set of roller bearings400may be configured to translate along the alignment mold125as each of the one or more roller bearing carriages135translates along the railing system110(referring toFIG.1). As illustrated, the alignment mold125may comprise a body405, a first top side410, and a second top side415. The first top side410and the second top side415may be angled in relation to each other, thereby forming the apex325of the alignment mold125. The alignment mold125may be any suitable size, height, shape, and any combinations thereof. In embodiments, the alignment mold125may comprise any suitable materials, including, but not limited to, polymers, ceramics, composites, rubber, and any combinations thereof. In one or more embodiments, the alignment mold125may be at least partially inserted into the platform115, wherein the body405may be contained within the platform115, and the first top side410and the second top side415may extend from the body out and away from the platform to form the apex325. With reference toFIGS.1-4, a method as presented in the present disclosure may be described. An operator may dispose the seal130at least partially over the alignment mold125in order to align the first bend315with the apex325of the alignment mold125. The operator may then dispose the one or more roller bearing carriages135over at least a portion of the seal130by translating the one or more roller bearing carriages135along the railing system110, wherein this translation results in the sets of roller bearings400translating over the portion of the seal130. As sets of roller bearings400translate over the portion of the seal130, the first bend315may be aligned with the apex325. Once aligned, the operator may secure the position of the seal130relative to the alignment mold125. To verify alignment, the operator may visually determine or inspect that the first bend is aligned with the apex325by referring to a reference marking. If the first bend315is in the tolerance of the reference marking, the seal130may be secured relative to the alignment mold125. In embodiments, the operator may secure the seal130through any suitable methods, including, but not limited to, encapsulating the seal130with tape. The operator may then position the ultrasonic cutter tool105to cut the seal130. The operator may actuate the mechanical adjuster200to move the blade210along an x-axis, y-axis, z-axis, or any combinations thereof in order to position the blade210relative to the apex325. The blade210may be positioned offset from the apex325by a distance. The operator may then verify that the blade210is aligned with the alignment mold125via the alignment component220. If aligned, the ultrasonic cutter tool105may be actuated, by power provided by the power source120, to vibrate the blade210at a designated frequency. As the blade210vibrates, the operator may translate the ultrasonic cutter tool105along the railing system110to cut the seal130. In embodiments, as the ultrasonic cutter tool105translates, the one or more roller bearing carriages135may be displaced. The ultrasonic cutter tool105may cut the first leg300of the seal130with the blade210. The present disclosure may provide numerous advantages, such as the various technical advantages that have been described with respective to various embodiments and examples disclosed herein. Other technical advantages will be readily apparent to one skilled in the art from the following figures, descriptions, and claims. 
Moreover, while specific advantages have been enumerated in this disclosure, various embodiments may include all, some, or none of the enumerated advantages. Herein, “or” is inclusive and not exclusive, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A or B” means “A, B, or both,” unless expressly indicated otherwise or indicated otherwise by context. Moreover, “and” is both joint and several, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A and B” means “A and B, jointly or severally,” unless expressly indicated otherwise or indicated otherwise by context. The scope of this disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the example embodiments described or illustrated herein that a person having ordinary skill in the art would comprehend. The scope of this disclosure is not limited to the example embodiments described or illustrated herein. Moreover, although this disclosure describes and illustrates respective embodiments herein as including particular components, elements, feature, functions, operations, or steps, any of these embodiments may include any combination or permutation of any of the components, elements, features, functions, operations, or steps described or illustrated anywhere herein that a person having ordinary skill in the art would comprehend. Furthermore, reference in the appended claims to an apparatus or system or a component of an apparatus or system being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a particular function encompasses that apparatus, system, component, whether or not it or that particular function is activated, turned on, or unlocked, as long as that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative. Additionally, although this disclosure describes or illustrates particular embodiments as providing particular advantages, particular embodiments may provide none, some, or all of these advantages. | 15,873 |
11858161 | DETAILED DESCRIPTION OF THE EMBODIMENTS Reduced to its essential structure and with reference to the figures of the attached drawings, a cutting-off machine to which a cutting unit is applicable according to the present invention is of the type comprising:a structure (SC) on which the logs to be transversely cut are moved in order to obtain rolls of shorter length;a cutting unit (CU) arranged at a predetermined point of said structure (SC) and comprising a support plate (1) for a blade (2) which can be removably connected to a respective rotary actuator (20) arranged at one end of said plate (1) and able to control the rotation of the blade itself around its own axis (x-x) at a predetermined speed, said plate (1) being, in turn, constrained to a further actuator which drives it into rotation with a predetermined angular speed around an axis parallel to the axis of rotation (x-x) of the blade (2);a sharpening unit with two grinding wheels (3) adapted to sharpen the blade (2);a device for positioning said grinding wheels (3) with respect to the blade (2). FIG.1schematically shows the main elements of a cutting-off machine (CM) in which a cutting unit in accordance with the present invention can be mounted, it being understood that the drawing is provided solely to allow the location of the cutting unit to be identified with respect to the path of the logs. It is also understood that the cutting-off machine can be made in any suitable way for executing the transversal cutting of logs of paper material, to obtain rolls of shorter length, by means of a cutting unit comprising a blade which acts transversely to the logs themselves. In the example ofFIG.1, in accordance with a per se known construction scheme, the actuator (20) is connected to the blade (2) by means of a belt (21) which connects the central pin (22) of the same blade to the shaft (23) of the actuator (20) through a pulley arranged on the free end of the shaft (23). Furthermore, the plate (1) is rotated around an axis parallel to the axis of rotation of the blade (2) by means of a corresponding rotary actuator (A1) having a shaft (B1) parallel to the shaft (23) of the actuator that controls the rotation of the blade (2). The actuator (20), which for example is an electric motor, is integral with a box-shaped body (BB) located above said structure (SC) and inside which the belt (21) and the shafts (23) and (B1) are arranged. Said body (BB) is connected with a corresponding actuator (BA) which, through a screw (VA) acting on a nut bushing arranged on an upper side of the same body (BB), controls its vertical position, i.e. its positioning with respect to the underlying structure (SC). Consequently, by controlling the position of the body (BB), the blade (2) can be positioned at the desired height. The actuator (A1), which for example consists of an electric motor, is also integral with the body (BB). In practice, the blade (2) rotates around a respective axis (x-x) which is parallel to the axis of rotation of the plate (1). A cutting unit (CU) according to a possible embodiment of the present invention comprises a plate (1) with an upper side (10), a lower side (11), a front side (F1) and a rear side (R1). The central pin (22) of the circular blade (2) is mounted on the lower side (11) of the plate (1), and the blade (2) is applied in a removable way on said pin in order to allow its replacement when necessary. 
The blade (2) is oriented parallel to the plate (1) and is positioned at a predetermined distance from the front side (F1) of the latter. On the plate (1) there are also mounted two grinding wheels (3) for sharpening the blade (2) and a device for positioning the grinding wheels (3) with respect to the blade (2). Each grinding wheel (3) is applied on a respective support shaft (30) whose axis (A30) has a predetermined inclination with respect to the front side (F1) of the plate (1) and, consequently, with respect to a corresponding side of the blade (2).FIG.8schematically shows the spindle (30) supporting a grinding wheel (3), the respective axis (A30), the inclination of the grinding wheel (3) in the sharpening position with respect to a side (A2) of the blade (2) and the lying plane (P2) of the latter. In accordance with the present invention, the aforementioned grinding wheel positioning device (3) comprises:a primary carriage (4) movable parallel to the plate (1) according to a primary movement direction (PD);two secondary carriages (42,43) connected to the primary carriage (4) and individually movable according to a secondary movement direction (SD) orthogonal to said primary movement direction (PD), each secondary carriage (42,43) having a seat for supporting the shaft (30) of a corresponding grinding wheel (3). In practice, the primary movement direction (PD) is a direction parallel to the plane (P2) where the blade (2) lies, i.e. a radial direction with respect to the latter, while the secondary movement direction (SD) is a direction parallel to the axis (x-x) of rotation of the blade (2). The primary carriage (4) can consist of two independent units (40,41) to each of which a corresponding secondary carriage (42,43) is connected. Alternatively, the primary carriage can consist of a single unit (400) to which both the secondary carriages (42,43) are connected. With reference to the example shown inFIGS.2-7, the primary carriage (4) consists of two independent units, each of which consists of a body (40,41) constrained to the internal side (F1) of the plate (1) by means of a rectilinear guide (LG) which allows its guided sliding along the primary movement direction (PD). The sliding of each body (40,41) along the primary movement direction (PD) is controlled by a corresponding electric motor (M0, M1). Each motor (M0, M1) is fixed on the internal side (F1) of the plate (1) and drives a threaded shaft (TS) which engages a corresponding nut bushing (MV) formed on each body (40,41). Therefore, each body (40,41) can be moved independently from the other body by the respective motor (M0, M1) along the primary movement direction (PD). Each of said bodies (40,41) has a first side (4P) parallel to the internal side (F1) of the plate (1) and a second side (4H) orthogonal and below the first side (4P). The first side (4P) slides along the respective guide (LG). The second side (4H) constitutes a bracket structure whose function is indicated below. In practice, each of said bodies (40,41), seen laterally, has a structure with a part (4P) parallel to the internal side (F1) of the plate (1) and a part (4H) orthogonal to the same internal side (F1) of the plate (1) and oriented towards the outside (E) so as to define a shelf above the blade (2). In the example described above, the movement of the bodies (40,41), i.e. 
the movement of the two units that make up the primary carriage (4), is a guided movement thanks to the presence of the guides (LG) that constrain the bodies (40,41) to the internal side (F1) of the plate (1). The references “PT” denote two sliding blocks arranged at a predetermined distance from each other on the side (4P) of each body (40,41) and intended to slide on said guides (LG). Each of the secondary carriages (42,43) has a first arm (PA) parallel to the bracket (4H) of the respective primary carriage, to which it is connected by means of a corresponding slide guide (G2, G3), and a second arm (SA) which is orthogonal to the first arm (PA) and, at its free end, supports the shaft (30) of a respective grinding wheel (3). The second arm (SA) passes through an opening (BL) of the bracket (4H), so that the grinding wheel (3) with its shaft (30) are below the bracket (4H) and the second arm (SA) is free to move in the opening (BL) according to the secondary movement direction (SD). A connecting rod (B2, B3) is connected to the first arm (PA) of each secondary carriage (42,43) and is connected to a corresponding electric motor (M2, M3). Each motor (M2, M3) is supported by a surface (SM) which each primary carriage (40,41) has at a predetermined distance from its side (4P) parallel to the plate (1). Each connecting rod is connected to the first arm (PA) by means of a pin (PN) orthogonal both to the connecting rod and to the first arm. Therefore, each motor (M2, M3) can move the respective secondary carriage (42,43) according to the secondary movement direction (SD). This movement is a guided movement since each secondary carriage is connected to the primary carriage by means of a respective slide (G2, G3) which, in fact, is oriented according to the secondary direction (SD). Therefore, each grinding wheel (3) is supported by the cutting unit (CU) in such a way that it can be moved both according to the primary movement direction (PD) and the secondary movement direction (SD). In fact, the bodies (40,41) that make up the primary carriage (4) can be moved in the primary movement direction (PD) by means of the motors (M0, M1), while the secondary carriages (42,43) can be moved on the primary carriage along the secondary movement direction (SD) by the motors (M2, M3). The grinding wheels (3) are oriented with their respective grinding surfaces (31) towards the plane (P2) where the blade (2) lies. The primary carriage is provided, in correspondence with its lower side, i.e. the side facing the blade (2), with an optical sensor (100) whose function is described below. For example, the optical sensor (100) is mounted below the bracket (4H) of any of the bodies (40,41) previously described. The optical axis (101) of the sensor (100) is spaced by a predetermined value (b) from a reference line, which can be the so-called “dipping line” (3L) of the grinding wheels (3), so as to intercept the cutting edge (200) of the blade (2), when the primary carriage approaches the latter, before the grinding wheels (3) are arranged in the sharpening position on the blade. The dipping line is a reference line of each grinding wheel (3), i.e. a known geometric parameter supplied by the manufacturer. This parameter identifies the correct position of the grinding wheel with respect to the blade for sharpening purposes. In practice, for the correct sharpening of the blade, the dipping line of the grinding wheel must be in a position of tangency to the cutting edge of the blade, as shown in the diagram inFIG.14. 
In this condition, the abrasive part of the grinding wheel interferes correctly with the area of the blade to be sharpened, i.e. an optimal contact condition is achieved between the grinding wheel and the blade during the sharpening phase. A possible operating mode of the device described above is the following. When a new blade is mounted on the cutting unit (CU), the primary carriage is moved along the primary movement direction (PD). Then, the optical sensor (100) detects the edge (200) of the blade (2), and the run of the primary carriage continues until it stops when the optical axis (101) has passed the said edge (200) of a value corresponding to the value (b) previously described. For this purpose, the optical sensor (100) is connected to the motors (M0, M1). In this way, the grinding wheels (3) are correctly positioned with respect to the two sides of the blade (2) for the subsequent sharpening phase. At this point, the secondary carriages (42,43) are moved along the secondary movement direction (SD) by the motors (M2, M3) so that each grinding wheel (3) is brought with the respective surface (31) in contact with the corresponding side of the blade (2) which rotates around its own axis (x-x). This contact is detected through the same blade (2) which, in fact, undergoes a slowdown as a consequence of the contact itself. Normally the motor (20) that drives the blade is controlled by a system equipped with a control function that ensures a constant rotation speed of the blade around the rotation axis (x-x) during the transversal cutting of the logs. When the grinding wheel positioning device is in operation, whereby the grinding wheels are moved along the secondary movement direction (SD) as previously mentioned, the aforementioned motor (20) control function is temporarily deactivated. The contact of the wheels (3) with the blade (2) causes a slowdown of the latter and this condition is assumed as an indicator of the contact between the wheels and the blade. When this occurs, the run of the secondary carriages in the direction (SD) is stopped. Therefore, the grinding wheels (3) will always be correctly positioned on the blade (2) regardless of the state of wear, and therefore regardless of the actual diameter, of the blade itself. Since the run of the primary carriage towards the blade (2) is controlled by the optical sensor (100) which detects the cutting edge (200) of the blade, the stopping point of the primary carriage at the end of this run is not predefined but it depends on the diameter, and therefore on the degree of wear, of the blade mounted in the cutting unit. In practice, in a first phase of positioning the grinding wheels (3), the actuators (M0, M1) that move the units (40,41) of the primary carriage are controlled by an optical sensor (100) which is connected to the same primary carriage and detects the cutting edge (200) of the blade (2) and interrupts the run of the primary carriage along the primary movement direction (PD) after this detection, so that the run of the primary carriage is given by the length of a path comprised between the initial waiting position and a position of detection of the cutting edge (200) by the optical sensor (100) increased by a predetermined value (b). And, in a second step of positioning the grinding wheels (3), the secondary actuators (42,43) are controlled so as to bring the abrasive side of the wheels (3) into contact with the blade (2). 
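Taken together, the two positioning phases just described amount to a simple control sequence: advance the primary carriage along the primary movement direction (PD) until the optical sensor (100) detects the cutting edge (200), overtravel by the predetermined value (b), and then advance each secondary carriage along the secondary movement direction (SD) until contact with the blade (2) is inferred from the slowdown of the blade. The following Python sketch merely restates that sequence under stated assumptions; the step sizes, the slowdown threshold, and all function and object names are illustrative placeholders and are not taken from the disclosure.

```python
def position_grinding_wheels(primary_carriage, secondary_carriages, blade, edge_sensor,
                             b_mm, step_mm=0.05, slowdown_fraction=0.02):
    """Illustrative sketch of the two-phase positioning sequence; all names are assumed."""
    # Phase 1: move along the primary movement direction (PD) until the optical
    # sensor detects the cutting edge, then continue by the predetermined value b.
    while not edge_sensor.edge_detected():
        primary_carriage.step(step_mm)
    primary_carriage.step(b_mm)

    # Phase 2: temporarily deactivate the constant-speed control of the blade motor,
    # then advance each secondary carriage along SD until the blade slows down,
    # which is taken as the indicator of contact between grinding wheel and blade.
    blade.constant_speed_control = False
    reference_rpm = blade.rpm()
    for carriage in secondary_carriages:
        while blade.rpm() > reference_rpm * (1.0 - slowdown_fraction):
            carriage.step(step_mm)
        carriage.stop()
    blade.constant_speed_control = True
```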
In practice, the value (b) measures the difference, along the direction (PD) of movement of the primary carriage, between the position of the optical sensor (100) projected on the plane (P2) of the blade (2) and the position of the line (L3) of the grinding wheels (3) projected on the same plane (P2). As previously mentioned, the primary carriage can be made up of only one unit (400), rather than two independent units. In this case, as shown inFIGS.9-12, only one motor (M0) is provided for moving the single unit (400). Also in this case, the primary carriage is provided with the optical sensor (100) which controls its run towards the blade (2) as previously described. In relation to what has been described above, a cutting-off machine for the transversal cutting of logs in accordance with the present invention comprises:a structure (SC) on which are moved the logs to be transversely cut in order to obtain rolls of shorter length;a cutting unit (CU) arranged at a predetermined point of said structure (SC) and comprising a support plate (1) for a blade (2) that is removably connectable to a respective rotary actuator (20) able to determine the rotation of the same blade about its own axis (x-x) at a predetermined speed, said plate (1) being, in turn, constrained to a further actuator which drags it into rotation with a predetermined angular speed about an axis parallel to the rotation axis (x-x) of the blade (2), the blade (2) being arranged along a lying plane (P2) perpendicular to said rotation axis (x-x) at a predetermined position in the cutting unit (CU);at least one sharpening unit with two grinding wheels (3) suitably arranged to sharpen the blade (2) on opposite sides with respect to said plane (P2) and provided with an abrasive side (31), said grinding wheels (3) being circular wheels of predetermined radius;a positioning device for positioning said grinding wheels (3) with respect to the blade (2) arranged on each sharpening unit, by means of which each grinding wheel (3) is placed in a position of contact with the blade (2) in a sharpening step of the latter starting from an initial inoperative position; whereinsaid positioning device comprises a primary carriage (40,41;400) which can be moved along a primary direction (PD) radially with respect to the blade (2), starting from an initial waiting position, by means of one or more primary actuators (M0, M1), and two secondary carriages (42,43) each of which is supported by the primary carriage (40,41;400) and can be moved along a secondary direction (SD) parallel to the rotation axis of the blade (2) by means of two secondary actuators (M2, M3);in a first step of positioning the grinding wheels (3), said one or more actuators (M0, M1) moving the primary carriage (40,41;400) are controlled by a sensor (100) which detects the radius of the blade (2) and stops the primary carriage along the primary direction (PD) when the grinding wheels are arranged with the respective axes, with respect to the axis of the blade, at a distance (k) equal to the radius (r2) of the blade increased by the radius (r3) of the grinding wheels and decreased by a predetermined value (b);in a second phase of positioning the grinding wheels (3), the secondary actuators (42,43) are controlled so as to bring the abrasive side of the wheels (3) into contact with the blade (2). The optical sensor (100) can be replaced by a sensor of another type, for example an inductive sensor or an ultrasonic sensor. 
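Whatever sensor is used, the geometric relations stated above can be written compactly as follows; the symbols r2, r3, b and k are the quantities already defined, while d_detect (travel from the initial waiting position to the position at which the cutting edge (200) is detected) and d_run (total primary-carriage travel) are introduced here only for this restatement and do not appear in the disclosure:

```latex
d_{\mathrm{run}} = d_{\mathrm{detect}} + b, \qquad k = r_{2} + r_{3} - b
```

A more worn blade (smaller r2) simply gives a longer d_detect and a correspondingly smaller centre-to-centre distance k, so the dipping line of each grinding wheel remains tangent to the cutting edge regardless of the actual blade diameter.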
The cutting-off machine can also be provided with two sharpening units of the type described above. In this case, the two sharpening units are placed in different positions with respect to the blade (2) for acting each on a different area of the blade. This can be useful in the case of circular blades of large diameter, or of circular blades with bevels of different shapes along the radius, so that each sharpening unit can act on a corresponding area of the blade. Preferably, the two sharpening units are identical to each other. With reference to the example shown inFIG.13, the sensor (100) is associated with a slide (S10) mounted on guides (G10) oriented diagonally with respect to the direction (DS) of movement of a further slide (S1) on which the plate (1) is mounted. In a per se known manner, the plate (1) is moved towards the structure (SC). However, the movement of the plate is a function of the diameter of the blade (2) detected by the sensor (100). In accordance with the present invention, the actual diameter of the blade (2) is used to control the run of the primary carriage (40,41;400), not shown inFIG.13, towards the same blade in the sharpening phase. In all the examples described above, the sensor (100) detects the actual diameter of the blade (2). In fact, the position of the center of the blade with respect to the plate (1) is known and invariable, so that the detection of the cutting edge of the blade corresponds to the detection of the diameter of the latter. Therefore, in accordance with the present invention, the movement of the primary carriage (40,41;400) along the primary movement direction (PD) is always controlled by a sensor which detects the diameter of the blade (2), so that, independently from the diameter of the latter, the grinding wheels (3) are always brought to the correct sharpening position, in which the dipping line of the grinding wheels is tangent to the cutting edge of the blade. In other words, in a first phase of positioning the grinding wheels (3), said one or more actuators (M0, M1) used to move the primary carriage (40,41;400) are controlled by a sensor (100) which detects the radius of the blade (2) and drives the interruption of the run of the primary carriage along the primary movement direction (PD) when the grinding wheels are arranged with their respective axes, relative to the rotation axis of the blade, at a distance (k) equal to radius (r2) of the blade increased by the radius (r3) of the grinding wheels and decreased by a predetermined value (b). It is noted that the radius (r3) of the grinding wheels (3) is a known value. In practice, the details of execution may in any case vary in an equivalent manner as regards the individual elements described and illustrated without thereby departing from the idea of the solution adopted and therefore remaining within the limits of the protection granted by this patent in accordance with the following claims. | 20,264 |
11858162 | DETAILED DESCRIPTION OF THE INVENTION FIGS.5through12schematically show non-limiting embodiments of impellers and components that are capable of use with a variety of cutting machines. The shown impellers and components may be used with any centrifugal-type slicing machine or cutting head. For example, the shown impellers and components may be used with the centrifugal-type slicing machine10depicted inFIG.1and the cutting head ofFIG.2, and, in some instances may be a replacement or a modification of an impeller for such machines and cutting heads. As a matter of convenience, non-limiting embodiments of the impeller and components invention will be illustrated and described with reference to the slicing machine10ofFIG.1equipped with an annular-shaped cutting head12as described in reference toFIGS.1and2. As such, the following discussion will focus primarily on certain aspects of the impeller and components that will be described in reference to the slicing machine10and cutting head12, whereas other aspects not discussed in any detail below may be, in terms of structure, function, materials, etc., essentially as was described in reference to the impeller ofFIGS.1,3, and4. However, it will be appreciated that the following description of the impeller and components are also generally applicable to other types of cutting machines. Moreover, though such machines are particularly well suited for slicing food products, it is contemplated and one of skill will appreciate that the described impellers and components could be used in cutting machines that cut a wide variety of materials. To facilitate the description provided below of the embodiments represented in the drawings, relative terms may be used in reference to the orientation of an impeller within the cutting head12, as represented by the impeller14inFIG.1. On the basis of the coaxial arrangement of the cutting head12and impeller14of the machine10represented inFIG.1, relative terms including but not limited to “axial,” “circumferential,” “radial,” etc., and related forms thereof may also be used below to describe the non-limiting embodiments represented in the drawings. All such relative terms are useful to describe the illustrated embodiments but should not be otherwise interpreted as limiting the scope of the invention. Turning now toFIGS.5,6, and7, an impeller60in accordance with a first non-limiting embodiment of the present invention is shown. Similar to the impeller14ofFIGS.1,3, and4, the impeller60has generally radially-oriented paddles62with faces64that engage and direct material radially outward against knives20of the cutting head12as the impeller60rotates about its axis of rotation. To that end, centrifugal forces created by the rotation of the impeller60cause a product that enters the impeller60to move radially outward, and once the product encounters a paddle62its radially outward movement is directed by the paddle62toward a knife20of the cutting head12. The paddles62shown in the non-limiting embodiment ofFIGS.5,6, and7may be coupled to the lower plate66, the upper plate68, or both. It will be appreciated that, although the paddles62are shown as being disposed between the lower plate and an annular-shaped upper plate68, the upper plate is not necessary and thus, the paddles62will simply be attached to the lower plate66. The impeller60may be configured with individually formed paddles62arranged between a pair of annular-shaped plates66and68. 
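As a brief quantitative note on the centrifugal action described above, the radially outward force on a product piece (or on foreign debris) follows the usual relation; the symbols m, ω and r are introduced here only for illustration and do not appear in the disclosure:

```latex
F = m\,\omega^{2} r
```

where m is the mass of the piece, ω the angular speed of the impeller60and r its radial distance from the axis of rotation, so the force grows toward the perimeter of the lower plate66.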
The impeller60and its components can be formed of any suitable material such as stainless steel or manganese-nickel-aluminum-bronze materials in addition to commonly-used MAB alloys and may be cast as integral components of the lower and/or upper plates66and68. It is contemplated that the impeller and its components may be made in any suitable manner and formed with any suitable material that will function for its intended purpose. In the non-limiting embodiments shown inFIGS.5,6, and7, the paddles62may be individually mounted with bolts70and pins72to a corresponding set of mounting holes74provided (e.g., machined) in the plates66and68, though it is also contemplated and understood that any of the paddles62could be directly attached to only one of the lower and upper plates66and68and indirectly attached to the other plate66or68as a result of the lower and upper plates66and68being coupled together by, for example, posts or connecting rods. As shown inFIG.5, additional sets of mounting holes74can be provided in the plates66and/or68to enable different numbers of paddles62to be mounted on the impeller60. The placement (i.e., locations) of the mounting holes74determines the orientation or pitch of each paddle face64relative to a radial of the impeller60terminating at the outermost radial extent of the paddle face64. The placement of the mounting holes74can be chosen so that the pitches of the paddle faces64are negative (the face64of each paddle62does not lie on a radial of the impeller60and the radially innermost extent of each paddle face64is angled away from the direction of rotation of the impeller60relative to a radial of the impeller60), neutral (the face64of each paddle62lies on a radial of the impeller60), or positive (the face64of each paddle62does not lie on a radial of the impeller60and the radially innermost extent of each paddle face64is angled toward the direction of rotation of the impeller60relative to a radial of the impeller60). FIG.6represents an individual paddle62and shows an outer radial extent78of the paddle62in proximity to the perimeter67of the lower plate66. The skilled artisan will appreciate that the location of the individual paddles can vary greatly with respect to the perimeter67so long as the outer radial extent78does not contact the knife. As such, it is contemplated that the outer radial extent78may be located inside, equal to, or outside the perimeter67of the lower plate66, depending on the location of the knife relative to the lower plate66. In one non-limiting embodiment as shown inFIGS.5,6, and7, the outer radial extent78of the paddles62is adjacent to, but not contiguous with, the perimeter67of the lower plate66, such that a radial gap or distance exists between the outer radial extent78and the perimeter67. The outer radial extent78of each paddle62may be generally straight and oriented in the axial direction of the impeller60(from top to bottom inFIG.6). Suitable dimensions for the paddle62will depend in part on the size of the food products being processed, and can therefore vary considerably. As shown, the radially innermost extent of each paddle62may curve radially outward as it approaches the upper plate68, though other shapes and profiles are possible, including straight. FIGS.5,6, and7further depict the paddles62as having a generally linear or straight face64, although it is contemplated that the face64may be curved (either concavely or convexly). 
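The negative/neutral/positive pitch distinction described above can be summarized with a single angle: let θ denote the angle between the paddle face64and the radial of the impeller60terminating at the outermost radial extent of that face, taken as positive when the radially innermost extent of the face is angled toward the direction of rotation. The symbol θ is introduced here only for this summary and does not appear in the disclosure:

```latex
\theta < 0\ \text{(negative pitch)}, \qquad \theta = 0\ \text{(neutral pitch)}, \qquad \theta > 0\ \text{(positive pitch)}
```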
Further,FIGS.5,6, and7show each paddle62as having the optional feature of axially oriented grooves80, which may inhibit products from rotating while in contact with the paddles62. To this end, it will be appreciated that the face64of the paddle may simply be flat, i.e., without grooves. Or, the face may have grooves provided in an orientation other than axially oriented. The non-limiting embodiment ofFIGS.5,6, and7also depicts the paddles62as being equipped with multiple posts82extending from and spaced along their outer radial extent78, forming multiple gaps84through which foreign debris (which, as used in this description and claims, includes rocks and any other types of contaminants that may accompany and/or be imbedded in a material or product being cut) can pass without damaging the paddles62or the knives20and knife holders30of the cutting head12. The posts82may be replaceable, for example, as a result of being threaded into the outer radial extent78of each paddle62. The posts82may have a generally conical shape and may be angled so that a profile of its conical shape is coplanar with the face64of its paddle62. As evident fromFIGS.5,6, and7, the uppermost extent of each paddle62is shown as lacking a post82but instead, each paddle62has an upper shear edge86(corresponding to the upper shear edge46ofFIG.4) that protrudes from the outer radial extent78of each paddle62. It will be understood, however, that the upper shear edge86may be replaced with a post82. FIGS.5,6, and7also show that the lowermost extent of each paddle62is entirely defined by its outer radial extent78and lacks the lower shear edge48ofFIG.4. In the depicted but non-limiting configuration shown inFIGS.5,6, and7the upper shear edge86and the distal ends of the posts82define the outermost radial extent of the paddle62. Further,FIG.7shows an embodiment where the lowermost extent of the paddle62also lacks a post82, such that a larger gap88exists along the portion of the outer radial extent78below the lowest post82of the paddle62. In addition, in the paddle62shown inFIG.7, the upper shear edge86and the distal ends of the posts82define the outermost radial extent of the paddle62, and the larger gap88defines a lower opening through which relatively large rocks and other foreign debris are able to pass in order to escape around the paddle62and its outer radial extent78. AsFIGS.5,6, and7represent a non-limiting embodiment, it should be understood that other configurations are possible, including the number and locations of the posts82, the inclusion of a lower shear edge (e.g., corresponding to the lower shear edge48ofFIG.4), and the absence of any or all posts and/or shear edges. To that end, and as noted above, the paddles may be provided with an upper shear edge86, a lower shear edge similar to lower shear edge48shown inFIG.4, posts82or, may lack one, some, or all of these features. FIGS.5,6, and7also show multiple exit slots90provided on the lower plate66. The exit slots90may be located and spaced along the perimeter67of the lower plate66in proximity to the radially outermost extent78of each paddle62to create passageways through which rocks and other foreign debris110can pass to exit the impeller60. It is believed that as rocks pass into an exit slot90they may contact the cutting head12at a lower surface of the shoe26and interior surface of the lower ring22. Contact at these more robust surfaces, while undesirable, is less consequential than contact with the relatively more fragile knife20or knife holder30components. 
As such, the exit slots90provide the capability of avoiding or at least reducing the risk of damage to the paddles62of the impeller60and to the knives20and knife holders30of the cutting head12. Without being bound by any particular theory, it is believed that foreign debris enters the cutting mechanism10in one of two ways. First, and as depicted inFIG.11, the foreign debris110accompanies the food product and, as such, it drops or falls into the central area of the impeller along with the food product. Due to rotational forces, the foreign debris110is generally directed toward the cylindrical wall of the slicer where the foreign debris110can fall through one of the exit slots90. Second, and as depicted inFIG.16, the foreign debris110may be imbedded within the food product. As the food product is being sliced, the foreign debris110is revealed and freed from the interior of the food product at which time it is able to drop to the lower plate66to meet the exit slot90, which is rotating or moving in a direction toward the foreign debris110.FIG.16shows one potential path of movement of the foreign debris110released from the food product. In this instance, the foreign debris110moves from the food product through a gap between two adjacent posts82of a paddle62where the foreign debris110contacts the top surface66A of the lower plate66, contacts the front face64of an adjacent paddle and then falls through an exit slot90. In each instance it is thought that the foreign debris110may bounce off one or more of the surfaces of the shoe26, the lower plate66, the paddle62, and/or the paddle front face64, and in some instances, the knife20and/or knife holder30, before the foreign debris110passes through one of the exit slots90and is ejected from the cutting mechanism10. One of skill will appreciate that the presence of the exit slots90will minimize the amount or degree of damage, particularly to the knife20and/or knife holder30. InFIGS.5,6, and7the exit slots90may be selectively located to accommodate various alternative locations of the paddles62enabled by their mounting holes74, whether the paddles62are present or not. In the absence of such alternative locations, it is foreseeable that each exit slot90may be associated with a single paddle62. In some instances, the exit slots90may be formed so that at least one exit slot90is entirely located to one side of each paddle62on which the face64is formed. In other instances, one or more of the exit slots90may be formed such that the face64of the paddle62, or a portion of the face64of the paddle62, overhangs the exit slot90. It will also be appreciated that the exit slots90intersect the perimeter67of the lower plate66, and extend radially inward. In general, the exit slots90have a structural configuration such that the exit slot90, or a portion of the exit slot90, intersects the perimeter67of the lower plate66. In that regard, the specific structural configuration of the exit slot90itself or in conjunction with the structural configuration of the shoe26is such as to encourage the foreign debris110to encounter and pass through an exit slot90before damaging the knife20and/or the knife holder30. As one example, the exit slot90may be chamfered, i.e., configured with a shape that tapers outwardly from the top surface66A to the bottom surface66B of the lower plate66. Turning now toFIG.8, a detail view of one embodiment of an exit slot90is shown. 
The exit slot90includes a wall94adjacent the perimeter67of the lower plate that joins with an arcuate radially innermost wall92that terminates near the perimeter67of the lower plate66to define what may be described as a “hook”96that defines a protrusion98(which in this instance is rounded) that projects toward the wall94, creating what may be described as a neck91of the slot90between the rounded protrusion98and the wall94. The hook96and its protrusion98may help capture foreign debris110to reduce the risk that such foreign debris110may become wedged between the impeller60and a knife20or knife holder30of the cutting head12. A similar exit slot90structure is depicted inFIG.17. It is also believed that the trajectory of the foreign debris110will cause the foreign debris110to encounter the exit slot90passing through the neck between the wall94and protrusion98, or contacting the protrusion98before being deflected downward through the slot90. As such, the edge condition of the slot90defined by the walls92and94, in particular, their angle relative to the upper surface66A of the lower plate66, may be tailored to promote the foreign debris110dropping down through the slot90if the rock were to impact the walls92and94or the protrusion98, instead of bouncing out of the slot90. Optimal angles for the walls92and94foreseeably depend on the size and shape of the slot90, the size and mass of the foreign debris110, and rotational speed of the impeller60. In some embodiments, the arcuate wall92may have a radius of about 0.25 inch (about 0.6 cm) and the circumferential distance between the wall94and protrusion98, i.e., the neck of the slot90, may be about 0.375 inch (about 1 cm). In the embodiment shown inFIG.8, the walls92and94of each slot90are inclined so that the lower exit of the slot90at the lower surface66B of the lower plate66is larger than the upper entrance of the slot90at the upper surface66A of the lower plate66to promote egress of foreign debris100from the impeller60through the slot90. ExaminingFIG.8, it may be appreciated that the walls92and94and the protrusion98of the depicted slot90are defined by multiple wall surface regions having different orientations relative to each other. The particular shape of the slot90represented inFIG.8can be produced by a multi-step machining process to size, shape, and orient the exit slots90and their walls92and94to promote exiting of foreign debris through the exit slots90. Because the exit slots90serve a different function from the mounting holes74they differ from the mounting holes74in terms of their size, shape, and/or locations on the lower plate66of the impeller60. For example and as shown inFIG.5, the exit slots90pass through the lower plate66between upper and lower surfaces66A and66B of the plate66and intersect the perimeter67of the lower plate66; whereas the mounting holes74do not intersect the perimeter. Various shapes and sizes are foreseeable for the exit slots90. For example, the exit slots90may have an oblong shape with its major dimension oriented in a radial direction of the lower plate66. Alternatively,FIGS.9and10schematically show a configuration for the lower plate66of the impeller60with exit slots90. The lower plate66ofFIGS.9and10differ from, for example the lower plate66shown inFIG.5, as a result of being fabricated to have exit slots90that are equally circumferentially spaced along the entire perimeter67. 
Other than being equally spaced instead of selectively located, the exit slots90can be similarly sized and shaped as the exit slots90shown in the other figures. Other aspects of the embodiment ofFIGS.9and10can be, in terms of structure, function, materials, etc., essentially as was described for the embodiment ofFIGS.5through8. Another example of a configuration of an exit slot90in conjunction with a paddle62is shown inFIG.15. In this arrangement, the exit slot90is in the shape of a rectangle with its major dimension or long side oriented in a radial direction of the lower plate66. In this particular view, the exit slot is located adjacent the front face64of the paddle62with its long side oriented parallel to the front face64of the paddle62and with a portion of the top surface66A of the lower plate located between the front face64and the edge of the exit slot90. As noted above, the edge of the exit slot may be contiguous with the front face64of the paddle62. Alternatively, the front face64or a portion of the front face64of the paddle62may overhang a portion of the exit slot. In addition, as depicted inFIG.15, the major dimension or long side of the exit slot extends from the perimeter67to the radially inward-most portion of the paddle, i.e., the exit slot90extends the entire length of the front face64of the paddle. It is contemplated that the exit slot extends only a portion of the length of the front face64of the paddle. The width of such an exit slot90shown inFIG.15, or a similarly shaped exit slot, should be such that foreign debris110can pass through the exit slot90but that the material that is being sliced (e.g., the potato that is being sliced into pieces) does not pass through the exit slot90. Accordingly, it is desirable to provide an exit slot90width or opening that is about less than one half of the size of the material being sliced. In some instances, the exit slot90width is about 0.5 inch to about 0.625 inch. While the shape of the exit slots90and their location on the lower plate66with respect to the paddles62will aid in providing a suitable egress for foreign debris110, it will be appreciated that the shape of the walls defining the exit slots90may also help to direct the foreign debris110away from the knife20and knife holder30and to retain or encourage movement of the foreign debris110through the exit slot90. To that end and as explained above with respect toFIG.8, the walls may be chamfered, tapered outwardly, or have multiple wall surface regions having different orientations relative to each other. In some embodiments, one or both of the walls92and94of the exit slots90may be orthogonal or perpendicular to the upper surface66A of the plate66as depicted in, for example,FIGS.14and15. Turning toFIG.11, a proposed exemplary trajectory of foreign debris (e.g., a rock)110is shown. It will be seen that the foreign debris110vertically enters the impeller60ofFIGS.5through7along its axis of rotation toward a central zone69of the upper surface66A of the lower plate, travels across the upper surface66A of the lower plate66under the influence of centrifugal forces generated by the rotation of the impeller60, and then through one of the exit slots90in the lower plate66. It will be appreciated that the depicted travel of the foreign debris110shown inFIG.11is merely representative and that the foreign debris110may contact the upper surface66A at different locations, more or less than shown, may contact the paddle62, knife20, shoe26, holder30, etc. before exiting through the exit slot90. 
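The sizing guideline given above for the exit slots90can also be restated as a simple inequality; w (slot width or opening) and s (characteristic size of the material being sliced) are symbols introduced here only for this restatement:

```latex
w < \tfrac{1}{2}\,s, \qquad \text{with } w \approx 0.5\ \text{to}\ 0.625\ \text{inch in some instances,}
```

so that foreign debris110can pass through the exit slot90while the product being sliced cannot.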
It is also contemplated that, in addition to the configuration of the exit slots90and their location relative to the paddles, the wall or shoe26, or lower support ring22of the cutting head12, can be configured to encourage foreign debris110to exit. In that regard,FIGS.12,13, and14show alternative structures of the shoe26in connection with an anticipated direction of movement of foreign debris110through the exit slot90. In particular,FIG.12shows the lower end of the shoe26extending from above the top surface66A of the lower plate66to at least the bottom surface66B. FIG.13shows that the lower end of the shoe26is tapered or chamfered to provide an angled surface such that, if the foreign debris contacts the angled surface, the foreign debris will be directed downward through the exit slot90.FIG.14shows that the lower end of the shoe26is at or slightly above the top surface66A of the lower plate to provide a larger egress path for the foreign debris110. Turning now toFIG.17, another example of a configuration of an exit slot90associated with a paddle62is shown. In this embodiment, the exit slot90includes a wall94adjacent the perimeter67of the lower plate that joins with an arcuate radially innermost wall92that terminates near the perimeter67of the lower plate66to define what may be described as a "hook"96that defines a protrusion98that projects toward the wall94, creating what may be described as a neck91of the slot90between the protrusion98and the wall94. The hook96and its protrusion98may help capture foreign debris110to reduce the risk that such foreign debris110may become wedged between the impeller60and a knife20or knife holder30of the cutting head12. Similar to the configuration of the exit slot90shown inFIG.8, in the configuration of the exit slot90shown inFIG.17, in view of the direction of rotation (clockwise in bothFIGS.8and17), the neck91may act like a scoop to "pull" debris radially inward away from the shoe26, lower support ring22, etc. into the cavity of the slot90that is generally located beneath the paddle62. It is also contemplated that the bottom portion63of the paddle62, i.e., the portion of the paddle62that is adjacent the top surface66A of the lower plate66, may be shaped to encourage movement of foreign debris110into the exit slot90. As one example of such,FIG.18shows a paddle62where the bottom portion63is tapered or angled in a manner to direct foreign debris110encountering the bottom portion63toward and through the exit slot90. In some instances the bottom portion63is tapered at an obtuse angle with respect to the lower plate66. While the invention has been described in terms of specific or particular embodiments, it should be apparent that alternatives could be adopted by one skilled in the art. For example, the machine10, cutting head12, impeller60, and their respective components could differ in appearance and construction from the embodiments described herein and shown in the drawings, functions of certain components of the machine10, cutting head12, and/or impeller60could be performed by components of different construction but capable of a similar (though not necessarily equivalent) function, and various materials could be used in their fabrication. In addition, the invention encompasses additional or alternative embodiments in which one or more features or aspects of a particular embodiment could be eliminated or two or more features or aspects of different disclosed embodiments could be combined. 
Accordingly, it should be understood that the invention is not necessarily limited to any embodiment described herein or illustrated in the drawings. It should also be understood that the purpose of the above detailed description and the phraseology and terminology employed therein is to describe the illustrated embodiments, and not necessarily to serve as limitations to the scope of the invention. Finally, while the appended claims recite certain aspects believed to be associated with the invention, they do not necessarily serve as limitations to the scope of the invention. | 24,963 |
11858163 | Similar or functionally equivalent features are provided in the figures with corresponding reference signs. The drawings are not intended to be limiting in any way, and it is contemplated that various embodiments of the invention may be carried out in a variety of other ways, including those not necessarily depicted in the drawings. The accompanying drawings incorporated in and forming a part of the specification illustrate several aspects of the present invention, and together with the description serve to explain the principles of the invention; it being understood, however, that this invention is not limited to the precise arrangements shown. DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS The following description of certain examples of the invention should not be used to limit the scope of the present invention. Other examples, features, aspects, embodiments, and advantages of the invention will become apparent to those skilled in the art from the following description, which is by way of illustration, one of the best modes contemplated for carrying out the invention. As will be realized, the invention is capable of other different and obvious aspects, all without departing from the invention. Accordingly, the drawings and descriptions should be regarded as illustrative in nature and not restrictive. Various suitable ways in which the teachings herein may be combined will be readily apparent to those of ordinary skill in the art in view of the teachings herein. Such modifications and variations are intended to be included within the scope of the claims. First Shrink Label FIGS.1and2show views of a receptacle assembly having a first shrink label made according to embodiments of the invention. Referring toFIGS.1and2, a shrink label20is shown attached to a bottle10. In the illustrated embodiment, bottle10includes a body12defining an interior space for storing product. A top end portion of body12includes a neck14having an opening (not shown) to provide access to the product within body12. Neck14is threadably coupled with cap16. In some other versions, cap16may be coupled with bottle10using other suitable configurations, such as a snap fit, a friction fit, a living hinge, etc. Cap16includes a trigger18and a nozzle19such that a user may squeeze trigger18to pivot trigger18inwardly towards bottle10to thereby dispense the product within body12through nozzle19. Shrink label20is shown positioned about bottle10. Shrink label20may comprise any suitable plastic material that is configured to shrink and thereby form to bottle10when heated. Shrink label10comprises a lower portion22and an upper portion24separated by a first perforation line26. Lower portion22of shrink label20thereby extends below first perforation line26to cover at least a portion of body12of bottle10. Lower portion22may have a length of about 182 mm and a width of about 184 mm, but any other suitable dimensions may be used. In the illustrated embodiment, lower portion22extends to a bottom end portion of bottle10. In some other versions, lower portion22only extends to cover a portion of body12. Upper portion24of shrink label20extends above first perforation line26to cover at least a portion of cap16of bottle10. Upper portion24may have a length of about 101 mm and a width of about 184 mm, but any other suitable dimensions may be used. In the illustrated embodiment, upper portion24extends to a top end portion of cap16such that upper portion24is configured to enclose nozzle19. 
In some other versions, upper portion24only extends to cover a portion of cap16. First perforation line26is positioned to extend circumferentially about shrink label20near neck14of bottle10. In the illustrated embodiment, first perforation line26is positioned just below neck14, but in other versions first perforation line26may be positioned at or above neck14. First perforation line26also extends continuously about the entire circumference of shrink label20. In some other versions, first perforation line26extends about only a portion of shrink label20. Shrink label10further comprises one or more second perforation lines28extending transversely relative to first perforation line26, through upper portion24of shrink label20, from first perforation line26to a top portion of bottle10. As shown inFIG.3, second perforation line28may be oriented obliquely relative to first perforation line26in an open configuration such that second perforation line28is oriented substantially vertical after shrink label20has been applied to bottle10, as shown inFIGS.1and2. First and second perforation lines26,28thereby allow a user to remove upper portion24of shrink label20prior to use of bottle10. Referring toFIGS.1and2, shrink label20further comprises an opening29extending through upper portion24to allow trigger18of bottle10to be exposed through opening29. Referring toFIG.3, shrink label20has a third perforation line27extending along upper portion24transverse to first perforation line26. In the illustrated embodiment, third perforation line27extends along only a portion of upper portion24such that third perforation line27is positioned above first perforation line26and below a top end surface of upper portion24. Third perforation line27may have a length of about 55 mm, and be positioned about 16 mm above first perforation line26and about 30 mm below the top end surface of shrink label20, but any other suitable dimensions can be used. The length of third perforation line27thereby generally corresponds to the length of trigger18, but any other suitable lengths can be used. Third perforation line27is formed such that third perforation line27is configured to rip more easily than First and second perforation lines26,28. Accordingly, when shrink label20is heated and shrunk about bottle10to form to bottle10, third perforation line27breaks along third perforation line27to form opening29. Trigger18of bottle10can thereby extend through opening29. Additionally or alternatively, third perforation line27may be formed as a slit instead of perforations such that the slit is configured to expand as shrink label20is applied to bottle10. Still other suitable configurations for third perforation line27will be apparent to one with ordinary skill in the art in view of the teachings herein. Use of First Shrink Label FIG.3shows the shrink label20of the receptacle assembly described above in an expanded (i.e. not shrunk) configuration. A use of the shrink label20to cover a bottle, thus forming the receptacle assembly ofFIG.1, is described in the following. To apply shrink label20to bottle10, bottle10may be positioned within shrink label20in an open configuration, as shown inFIG.3. For instance, body12of bottle10may be aligned with lower portion22of shrink label20, neck14of bottle10may be aligned near first perforation line26of shrink label20, and cap16of bottle may be aligned with upper portion24of shrink label20to position trigger18adjacent with third perforation line27. 
Energy such as heat may then be applied to shrink label20such that shrink label20shrinks to form to bottle10, as shown inFIGS.1and2. As shrink label20forms to bottle10, third perforation line27breaks apart to form opening29to allow trigger18to extend through opening29while enclosing nozzle19. Shrink label20thereby holds the position of the cap16relative to the bottle10to further secure the cap16with the bottle10. This prevents the cap16from rotating and/or loosening relative to the bottle10to prevent product from leaking from the bottle10. Opening29of shrink label20may also inhibit shrink label20from incidentally leaking product during the heat shrink process. For instance, as shrink label20is applied to bottle10, compressive forces from shrink label20may pivot trigger18to incidentally leak product from bottle10. Opening29thereby allows trigger18to extend through opening29to inhibit shrink label20from pivoting trigger18and incidentally leaking product. A user may then pull downward on upper portion24of shrink label20to break upper portion24along second perforation line28down to first perforation line26. Upper portion24of shrink label20can then be ripped along first perforation line26to remove upper portion24of shrink label20from bottle10. Cap16may thereby be exposed to allow product to be dispensed from the bottle10. Still other suitable configurations for shrink label20will be apparent to one with ordinary skill in the art in view of the teachings herein. For instance, in some versions, shrink label20may comprise a pull-tab to aid in removing a portion of the shrink label20. Second Shrink Label A second shrink label made according to a first embodiment of the invention is shown inFIG.4and differs from the first shrink label20in the following. The second shrink label20A has a single slit27A instead of the third perforation line27. In this case it is not necessary to break apart any perforation line to form the opening29. When the shrink label shrinks (e.g. upon applying energy such as heat) and/or when the trigger penetrates the slit, the slit widens to form the opening29having smooth sides. The slit27A and above-mentioned third perforation line27are each examples of a puncture. The puncture may extend along any or both of: at least parts of the upper portion24, and at least parts of the lower portion22. The puncture may extend at least partially longitudinally. The first26A and second28A perforation lines correspond to the perforation lines26and28of the first shrink label20. The shrink label20A can be used in place of the shrink label20for covering the bottle10. Third Shrink Label FIG.5shows a view of another receptacle assembly, which has a third shrink label made according to embodiments according to the invention.FIG.6shows the shrink label20B for the receptacle assembly ofFIG.5in an expanded (i.e. not shrunk) configuration. Referring toFIG.5, a shrink label20B is shown attached to a bottle100including a body120defining an interior space for storing product. A cap160is fixed to an opening in a neck140of the bottle at the top of the bottle, such as by screwing, snap fit, a friction fit, a living hinge, etc. Cap160includes a pump mechanism which has a plunger180and a nozzle19. A user can push down on the plunger180to dispense product within body120through nozzle190. Shrink label20B is shown positioned about bottle100. The shrink label20B may be made from the same materials as the first20and second20A shrink labels. 
Shrink label20B comprises a lower portion22B and an upper portion24B separated by a first perforation line26B. Lower portion22B of shrink label20B thereby extends below first perforation line26B to cover at least a portion of body120of bottle100. Lower portion22B extends to a bottom end portion of bottle100. In some other versions, lower portion22B only extends to cover a portion of body120. Upper portion24B of shrink label20B extends above first perforation line26B to cover at least a portion of cap160of bottle100. Upper portion24B extends to a top end portion of cap160. First perforation line26B is positioned to extend circumferentially about shrink label20B near neck140of bottle100. First perforation line26B is positioned just below neck140, but in other versions first perforation line26B may be positioned at or above neck140. First perforation line26B also extends continuously about the entire circumference of shrink label20B. In some other versions, first perforation line26B extends about only a portion of shrink label20B. Shrink label20B further comprises one or more second perforation lines28B extending transversely relative to first perforation line26B, through upper portion24B of shrink label20B, from first perforation line26B to a top portion of bottle100. First and second perforation lines26B,28B correspond to the first26,26A and second perforation lines28,28A of the first20and second20A shrink labels. They equally allow a user to remove upper portion24B of shrink label20B prior to use of bottle100. Referring toFIGS.5and6, shrink label20B further comprises an opening29B extending through upper portion24B to allow nozzle190of bottle100to be exposed through opening29B. Referring toFIG.6, shrink label20B has a puncture27B extending along upper portion24B parallel to first perforation line26B. The puncture comprises a central portion271B formed as a (non-perforated) slit, and a respective perforated portion272B,273B on either side of the central portion271B. The central portion271B may directly join and/or be aligned with the perforated portions272B,273B. Puncture27B extends along only a portion of upper portion24B and puncture27B is positioned above first perforation line26B and below a top end surface of upper portion24B. The width of puncture27B thereby generally corresponds to the width of the nozzle190, but any other suitable widths can be used. Puncture27B is configured to rip more easily than first and second perforation lines26B,28B. Accordingly, when shrink label20B is heated and shrunk about bottle100to form to bottle100, puncture27B breaks at its perforated portions272B,273B to form opening29B. Nozzle190can thereby extend through opening29B. Additionally or alternatively, puncture27B may be formed entirely as a slit. Still other suitable configurations for puncture27B will be apparent to one with ordinary skill in the art in view of the teachings herein. In typical examples the puncture27B is positioned closer to one end of the shrink label than the punctures27and27A are. Use of Third Shrink Label A use of the shrink label20B to cover a bottle, thus forming the receptacle assembly ofFIG.5, is described in the following. To apply shrink label20B to bottle100, bottle100may be positioned within shrink label20B in an open configuration, as shown inFIG.6. 
For instance, body120of bottle100may be aligned with lower portion22B of shrink label20B, neck140of bottle100may be aligned near first perforation line26B of shrink label20B, and cap160of bottle may be aligned with upper portion24B of shrink label20B to position nozzle190adjacent with puncture27B. Energy such as heat may then be applied to shrink label20B such that shrink label20B shrinks to form to bottle100, as shown inFIGS.5and6. As shrink label20B forms to bottle100, the perforated end portions272B,273B of the puncture27B break apart to form opening29B wider than the central portion271B, to allow nozzle190to extend through opening29B. Shrink label20B thereby holds the position of the cap160relative to the bottle100to further secure the cap160with the bottle100, especially because the shrink label20B has unpunctured regions above and below the nozzle190. This more securely hinders the cap160from rotating and/or loosening relative to the bottle100to hinder product from leaking from the bottle100. The shrink label20B may function also as a tamper-evident seal. A user may then pull downward on upper portion24B of shrink label20B to break upper portion24B along second perforation line28B down to first perforation line26B. Upper portion24B of shrink label20B can then be ripped along first perforation line26B to remove upper portion24B of shrink label20B from bottle100. Cap160may thereby be exposed to allow the user to twist the cap160, thus activating the spring-loaded plunger by first extending it; by pushing on the extended plunger product can be dispensed from the bottle100. Still other suitable configurations for shrink label20B will be apparent to one with ordinary skill in the art in view of the teachings herein. For instance, in some versions, shrink label20B may comprise a pull-tab to aid in removing a portion of the shrink label20B. So a shrink label for application to a bottle, comprises a lower portion and an upper portion, wherein the lower portion is configured to be applied to at least a portion of a body of the bottle, wherein the upper portion is configured to be applied to at least a portion of a cap of the bottle, wherein the upper portion comprises a puncture such as a perforation line extending along a portion of the upper portion, wherein the puncture is configured to expand when the shrink label is applied to the bottle to form an opening, wherein a protrusion of the cap such as a trigger or a nozzle is positioned to extend through the opening. So a method of applying a shrink label to a bottle, wherein the bottle comprises a body and a cap having a protrusion such as a trigger or a nozzle, comprises a lower portion and an upper portion having a puncture extending along a portion of the upper portion, the method comprising the steps of: positioning the shrink label about the body such that the lower portion of the shrink label is aligned with at least a portion of the body and the upper portion of the shrink label is aligned with at least a portion of the cap, wherein the puncture is positioned adjacent to the protrusion; applying heat to the shrink label to form the shrink label to the bottle; expanding the puncture to form an opening in the shrink label; and positioning the protrusion through the opening. 
In the case that any or both of the first26A and second28A perforation lines may be omitted, even here the following advantages can still be achieved: the trigger is positioned through the opening, reducing leakage through an undesired trigger actuation; the nozzle is covered by the shrink label, further reducing leakage; the shrink label covers (is shrunk around) at least part of the cap and at least part of the body with a close fit, reducing undesired loosening of the cap; this hinders leakage; the shrink label has a larger surface area (advantageously a large design area can be implemented). The compound slit27B may be replaced with a puncture comprising only a transverse slit or only a transverse perforation. First Embodiment—Apparatus In the following an apparatus according to a first embodiment is described. The apparatus is for making the second shrink label20A. As shown schematically inFIG.7, the apparatus30comprises a mandrel32configured to receive a continuous strip36of flexible tubular material. The mandrel32is essentially columnar and has a vertical and stationary longitudinal axis. A top portion (not shown) of the mandrel32has a spreading element known in the art which can convert a strip of flexible tubular material from a flat form to an open form. The lower portion of the mandrel is shown inFIG.7and has a circular section. In variants of the present embodiment the mandrel may have any one or more of a circular, oval, polygonal and plate-like section. The strip36can be introduced to the mandrel32from a roll (not shown) which is prepared in advance. The lower portion of the mandrel may be formed from a sleeve shot part of the mandrel. The apparatus30comprises advancing means which are known in the art and not shown, for feeding the strip36. The advancing means may comprise one or more first pairs of rollers which engage with the inner and outer faces of the wall of the strip36. In this way the strip36can be fed onto the mandrel32from the mandrel's upper end. The advancing means may comprise one or more second pairs of rollers which engage with the inner and outer faces of the wall of the strip36at a lower position on the mandrel32than the first pairs of rollers. In this way a sleeve cut from the strip36can be fed from the lower end of the mandrel to another production station. One or more of the rollers may be driven by an electric motor. One roller in each pair of rollers may be accommodated in recesses (not shown) provided in the mandrel32. The mandrel32may be supported by some of the rollers. The apparatus30comprises a slitting blade40, as a puncturing means, driven by a puncturing mechanism (not shown). More specifically, the mechanism moves the blade40toward and away from the mandrel in the radial directions shown by the double-headed arrow42. In the present embodiment the movement is linear but in other embodiments the movement may comprise any or more of a linear, circular, and elliptical motion. For example the blade40may rotate about an axis: specifically the blade40may be a rotating (spinning) blade whose cutting profile is eccentric to its axis of rotation. The movement allows the blade40to penetrate the strip36to make a slit in a region of the strip36. The motion of the blade40may be driven any one or more of: electrically, pneumatically, and hydraulically. For example the blade40may be driven by an electric motor such as a servomotor. In a preferable embodiment the blade40is driven by a reciprocating pneumatic actuator. 
The blade40punctures the wall-thickness of the shrink label20A from one (outer) side while the mandrel32(opening device) supports the strip36from the other (inner) side. The slit is preferably straight and/or longitudinal. The slit may have a length (longitudinal extent) greater than or equal to any of: 2 mm, 5 mm, 10 mm, 15 mm, 20 mm, 25 mm, and 55 mm. The blade40in the present embodiment has a triangular shaped profile. The blade profile may alternatively comprise a plurality of triangular shapes arranged in a vertical line. The leading vertex of the triangular shape may have an angle of 45 degrees. The blade40is configured to create a continuous slit of predetermined length in the strip36. By providing more than one triangular shape, such as a saw-tooth like profile, the required stroke of the blade40for a given slit length can be kept short. The mandrel32comprises a recess44on its surface for receiving the blade40. The blade40can enter the recess44which is shaped in correspondence with the triangular profile of the blade40. The recess may have other shapes, such as a straight or circular shape, and may extend in a peripheral direction at least partly around the mandrel32. The recess44may be a through-hole. The recess may be configured so that the puncturing means40does not contact the mandrel32. The apparatus30comprises a rotary cutter46, as a cutting means, comprising one or more blades that can move toward and away from the mandrel in the radial directions shown by the double-headed arrow43so as to cut the strip36peripherally, above the blade40. A rotary cutter known in the art can be used here. The rotary cutter46may be configured to make a peripheral cut at least partly during the puncturing of the strip36. The assembly30has a perforation means50, comprising perforating blades, for making the first and second perforation lines. The perforation means50may comprise a respective perforating blade or blades for each perforation line and/or it may comprise a common perforating blade. The perforation means50is configured to reciprocatingly puncture a portion or portions of the strip36upstream of the mandrel32, the directions of reciprocation being shown by the double-headed arrow52. The perforation means50is thus configured to puncture a flattened portion of the strip36, for example by applying a perforation means commonly known in the art. The perforation means50may be provided downstream of the mandrel. First Embodiment—Method In the following a method according to the first embodiment is described.FIGS.10A to10Erepresent procedures of the method which are performed using the apparatus30described above and shown inFIG.7. Step A: As shown inFIG.10Aa strip36of tubular material is fed onto the mandrel32from the top portion of the mandrel32, for example by means of the first roller pairs of the advancing means (not shown). The position inFIG.10Amay be at least partially achieved by cutting and removing a previously made shrink label from the strip36. Step B: The strip36is fed (such as by continuing the feeding in Step A) until the free end33of a free-end portion34of the strip36reaches a predetermined position along the mandrel32, such as the bottom end of the mandrel32(FIG.10B). Thus a region of the strip becomes an open region held open and/or supported by the mandrel32. At this point the feeding is stopped. Step C: Subsequently a puncturing step is performed (FIG.10C), wherein the slitting blade40penetrates the strip36and enters the recess44. 
The extents of the puncture27A may be formed in correspondence with the extents of the blade40. In other words a longitudinally upper extent of the puncture27A may be formed by (or in correspondence with) the longitudinally upper extent of the blade40, and a longitudinally lower extent of the puncture may be formed by (or in correspondence with) the longitudinally lower extent of the blade40. So the strip36is punctured at a first distance from its free end33. Longitudinal end portions of the sleeve are not punctured in making the puncture. The first distance may be measured as the maximum longitudinal extent of the puncture from the free end. The strip36is cut about its periphery, optionally during the puncturing (FIG.10C). The rotary cutter46may be configured to start and optionally complete the cutting before the puncturing. In this case the blade40punctures a shrink label20A that is already cut from the strip36. So the strip36is cut at a second distance from its free end33, the second distance being greater than the first distance. Step D: After cutting, the rotary cutter46is retracted. After puncturing, the blade40is retracted (seeFIG.10D). The end portion of the strip36forms the end portion of the shrink label20A. The free end33becomes an open end of the shrink label20A. In summary of the above the sleeve is made from the strip of tubular material by: feeding a free-end portion of a given length of the strip onto a columnar mandrel; puncturing the free-end portion at a first distance from its free end, transversely to the feeding direction, to form a puncture; and cutting the free-end portion from the strip at a second distance from its free end, the second distance being greater than the first distance. Step E: Subsequently the shrink label20A is advanced (FIG.10E) in the direction of the solid arrow ofFIG.10Eby means of the second pairs of rollers so that it leaves the mandrel32from the mandrel's bottom end. The not-shown advancing means may transfer the newly-made free-end portion48of the strip36to the predetermined position at the same time that, or shortly after, the shrink label20A is transferred from the mandrel32. Preferably the shrink label20A is transferred from the mandrel32to a receptacle (not shown inFIG.10A to10E), which is further preferably positioned under the mandrel32. The method comprises a step (not shown inFIGS.10A to10E) of perforating a flattened portion of the strip36, upstream to the mandrel, to form the first26and the second28perforation lines, by means of the perforation means50. The first26and the second28perforation lines are provided on each shrink label20A. Because the puncturing is performed on a portion of the strip36, and the portion is provided on the mandrel32, accuracy and repeatability are improved. Also the blade40is prevented from puncturing the wall of the strip36twice, as would be the case if the strip36were punctured in a flattened state. Puncturing is performed by moving the blade40in a plane parallel to the mandrel axis. So accuracy and repeatability of the puncturing can be further improved. Further preferably the slit (substantially) coincides in the peripheral direction with a longitudinal crease in the tubular material. The crease may be formed when the tubular material is flat. So insertion of a protrusion on a receptacle is facilitated. By forming the first and second perforation lines on a flattened part of the strip36it becomes easy to form pairs of parallel perforations, or to form a peripheral perforation. 
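For ease of understanding, the sequencing of Steps A to E described above may be summarized as a simple supervisory cycle. The following is a minimal, non-limiting Python sketch of such a cycle for the first embodiment; the device objects and method names (advancer, blade40, cutter46, perforator50 and their calls) are hypothetical placeholders introduced only for this illustration and do not correspond to any interface disclosed herein.

    # Illustrative supervisory cycle for the first-embodiment apparatus 30.
    # All objects and method names below are assumed placeholders, not
    # disclosed machine interfaces.
    def make_shrink_label_first_embodiment(advancer, blade40, cutter46, perforator50):
        # The first and second perforation lines are formed on a flattened
        # portion of the strip upstream of the mandrel (not shown in detail).
        perforator50.perforate_flattened_strip()
        # Steps A and B: feed the strip onto the mandrel until its free end
        # reaches the predetermined position, then stop feeding.
        advancer.feed_until_free_end_at("bottom_end_of_mandrel")
        advancer.stop()
        # Step C: puncture the open region held on the mandrel; the peripheral
        # cut may be made at least partly during the puncturing.
        cutter46.extend()      # peripheral cut at the second distance from the free end
        blade40.extend()       # longitudinal slit at the first distance from the free end
        # Step D: retract the cutter and the blade.
        cutter46.retract()
        blade40.retract()
        # Step E: advance the finished sleeve off the bottom end of the mandrel
        # toward the receptacle or the next production station.
        advancer.eject_sleeve()

The ordering above merely mirrors the steps as described; in practice the timed cooperation of the advancing means, the blade40and the rotary cutter46would be governed by the not-shown control unit.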
Advantageously, by puncturing at the same time as cutting, the processing time can be short. The not-shown advancing means, the blade40, and the rotary cutter46may be configured to cooperate in a timed fashion by means of a not-shown control unit. When the mandrel32is arranged with its axis vertical, then the free end33; longitudinally lower extent of the puncture27A; longitudinally upper extent of the puncture27A; and location of cutting are longitudinally spaced from each other in that order. Thus the puncture27A is surrounded on all its sides by unpunctured material. The mandrel32may be stationary (e.g. rotationally stationary) during the puncturing and/or cutting. The strip36may be stationary relative to the mandrel32during the puncturing, such as at least rotationally stationary. Alternatively or in addition at least parts of the puncturing and the feeding may coincide. The method may be adapted to make a shrink label made according to the first embodiment by replacing the slitting blade40described above with a perforating blade. The sleeve may be punctured more than once and/or have more than one puncture. The method may be adapted to make a shrink label of any of the modified first and second embodiments described above by omitting perforating the strip36with the first26and/or the second28perforation lines. Second Embodiment—Apparatus In the following an apparatus according to a second embodiment of the invention is described, with the help ofFIGS.8and9and by comparison with the apparatus of the first embodiment. As shown schematically inFIG.8, the apparatus230differs from the apparatus30of the first embodiment in that the blade40of the apparatus30is replaced with a blade240shown in greater detail inFIG.9. The apparatus230is for making the third shrink label20B. The cutting edges of the blade240extend transversally to the longitudinal direction.FIG.9shows a detail of the blade240when the configuration ofFIG.8is viewed from above. The blade comprises a central portion240aas a slitting portion, and a respective perforating portion on each side of the central portion. Each perforating portion240b,240ccomprises a perforating blade adjoining the central portion240a. So the blade240may be described as a compound blade. The apparatus230according to the second embodiment differs from the apparatus30of the first embodiment also in that its mandrel232is longer than the mandrel32of the first embodiment by a length shown as "L" inFIG.8, and in that it is configured to perform the method of the second embodiment which is described in the following. Second Embodiment—Method In the following a method according to the second embodiment is described. The method is for making the third shrink label20B using the apparatus230of the second embodiment.FIGS.11A to11Hrepresent stages of the method which is performed using the apparatus230. Step A: As shown inFIG.11Aa strip236of tubular material is fed onto the mandrel232from the top portion of the mandrel232, for example by means of the (not shown) first roller pairs of the advancing means. The strip may be provided from a roll of tubular material. The strip opens as it passes over the mandrel232. The position inFIG.11Amay be at least partially achieved by cutting and removing a previously made shrink label from the strip236.
Step B: The strip236is fed (such as by continuing the feeding in Step A) until a free end233of a free-end portion234of the strip236reaches a predetermined position along the mandrel232, such as a position above the bottom end of the mandrel232(FIG.11B). Thus a region of the strip becomes an open region held open and/or supported by the mandrel232. Step C: Subsequently a cutting step is performed (FIG.11C), wherein the strip236is cut about its periphery, in particular without simultaneously performing a puncturing step. So the strip236is cut at a second distance from its free end233by the rotary cutter (cutting means)246which can reciprocate in the direction of the arrows243. Step D: After cutting, the rotary cutter246is retracted (FIG.11D). The free-end portion234forms an end portion of the shrink label20B, and the free end233becomes an open end of the shrink label20B. Step E: Subsequently, as shown inFIG.11E, the length of cut strip is advanced (fed) in the direction of the arrow ofFIG.11Eby means of further pairs of rollers (not shown) so that it arrives at a puncturing position shown inFIG.11F. So a length of cut strip (unpunctured shrink label20B) is fed until the free end233of a lower free-end portion234of the cut strip reaches a predetermined position along the mandrel232, such as the bottom end of the mandrel232. Thus the length of cut strip is held open and/or supported by the mandrel232. The feeding is stopped. Step F: Subsequently a puncturing step is performed (FIG.11F), wherein the blade240, by moving in the direction of the arrow242, penetrates the cut strip and enters the recess244. The extents of the puncture27B may be formed in correspondence with the extents of the blade240. In other words a first transverse extent of the puncture27B may be formed by (or in correspondence with) a first transverse extent of the blade240, and a second transverse extent of the puncture may be formed by (or in correspondence with) the second transverse extent of the blade240. So the length of cut strip236is punctured at a first distance from its free end233, the first distance being less than the second distance. Thus a shrink label20B is formed. Longitudinal end portions of the shrink label20B are not punctured in making the puncture. The puncture27B extends through only a part of the circumference of the shrink label20B. In summary of the above the shrink label20B is made from the strip236of tubular material by: feeding a free-end portion234of a given length of the strip236onto a columnar mandrel232; cutting the free-end portion234from the strip236at a second distance from its free end233; advancing the cut length of strip236along the mandrel; and puncturing the cut length of strip at a first distance from its free end233, to form a puncture27B, the second distance being greater than the first distance. Step G: Subsequently the blade240is retracted (FIG.11G). Step H: Subsequently the shrink label20B is advanced (FIG.11H) in the direction of the arrow ofFIG.11Husing the advancing means (not shown) so that it leaves the mandrel232from the mandrel's bottom end, such as by means of the second roller pairs (not shown). The advancing means may transfer the newly-made free-end portion248of the strip236to the predetermined position at the same time that, or shortly after, the shrink label20B is transferred from the mandrel232. Preferably the shrink label20B is transferred from the mandrel232to a receptacle (not shown inFIGS.11A to11H), which is further preferably positioned under the mandrel232.
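By way of comparison only, the reordered sequence of the second embodiment (cut first, advance the cut length by the distance "L", then puncture) can be sketched in the same hypothetical style as above; again, the object and method names are illustrative assumptions rather than disclosed interfaces.

    # Illustrative cycle for the second-embodiment apparatus 230. The essential
    # difference from the first embodiment is the ordering: the peripheral cut
    # is made first, the cut length is advanced along the longer mandrel 232,
    # and only then is the transverse compound puncture 27B made.
    def make_shrink_label_second_embodiment(advancer, blade240, cutter246, distance_L):
        advancer.feed_until_free_end_at("above_bottom_end_of_mandrel")  # Steps A and B
        cutter246.extend()                                              # Step C: peripheral cut only
        cutter246.retract()                                             # Step D
        advancer.advance(distance_L)                                    # Step E: move cut length to the puncturing position
        advancer.stop()
        blade240.extend()                                               # Step F: compound puncture 27B
        blade240.retract()                                              # Step G
        advancer.eject_sleeve()                                         # Step H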
The method comprises a step (not shown inFIGS.11A to11H) of perforating a flattened portion of the strip236, upstream from the mandrel232, to form the first26B and the second28B perforation lines on the shrink label20B, by means of the perforation means250reciprocating in the direction of the arrow252(shown inFIG.8). The method of the second embodiment differs from the method of the first embodiment, for example, in its step (FIG.11E) of advancing the cut strip along the mandrel232between the cutting step and the puncturing step. The length ‘L’ inFIG.8may correspond to the distance of said advancing. By doing so it is possible to puncture the shrink label20B close to its upper end without having to place the mechanism for the blade240so high up on the mandrel232that it risks interfering with the rotary cutter246. In the case that the cutting step is performed for each shrink label20B before the puncturing step, this reduces the chance of the presence of the puncture27B causing tension irregularities, which might lead to problems, in particular at the cutting step. Irregularities in the tension of the strip or shrink label can occur in particular in the case of a transverse puncture, even when a puncturing step does not coincide with a cutting step, and even when the rotary cutter and blade are far away from each other. Third Embodiment—Apparatus In the following an apparatus according to a third embodiment of the invention is described by comparison with the apparatus of the first embodiment. As shown schematically inFIG.12the apparatus330differs from the apparatus30of the first embodiment mainly in the following: The apparatus330is for making the third shrink label20B; instead of the longitudinally aligned blade40of the first embodiment, a blade340having a shape and orientation corresponding to the compound blade240ofFIG.9is provided. So the cutting edge of the blade340extends transversally to the longitudinal direction. The blade340is provided above the rotary cutter346, rather than below the rotary cutter as is the case in the first and second embodiments. Third Embodiment—Method In the following a method according to the third embodiment is described. The method is for making the third shrink label20B.FIGS.13A to13Erepresent stages of the method which are performed using the apparatus330. The method performs the following steps in a repeated cycle. Step A: As shown inFIG.13Aa strip336of tubular material is fed onto the mandrel332from the top portion of the mandrel332, for example by means of first roller pairs of the advancing means (not shown). The strip332may be provided from a roll of tubular material. Optionally the position inFIG.13Acan be at least partially achieved by cutting and removing a previously made shrink label from the strip. Step B: The strip336is fed (such as by continuing the feeding in Step A) until the free end333of a free-end portion334of the strip336reaches a predetermined position along the mandrel332, such as the bottom end of the mandrel332(FIG.13B). Thus a region of the strip336becomes an open region held open and/or supported by the mandrel332. The strip336has a puncture27B created in a puncturing step of a previous operation cycle. Step C: Subsequently a puncturing step is performed (FIG.13C), wherein the compound blade340, by reciprocating in the direction of the arrow342(FIG.12), penetrates the strip336and enters the recess344, producing a puncture27B. The extents of the puncture27Ba may be formed in correspondence with the extents of the blade340. 
In other words one transverse extent of the puncture27Ba may be formed by (or in correspondence with) the transverse extent of the blade340, and the other transverse extent of the puncture27Ba may be formed by (or in correspondence with) the other transverse extent of the blade340. Unpunctured portions of the strip336extend from the extents of the puncture27Ba in the circumferential direction. The strip336is cut about its periphery, optionally during the puncturing (FIG.13C). The rotary cutter346may be configured to start and optionally complete the cutting before the puncturing of the same step. For example the blade40punctures the length cut from the strip336. So the strip336is cut at a second distance from its free end333by the rotary cutter346which can reciprocate in the direction of the arrows343(FIG.12). The strip336is punctured above the cut, to form the puncture27Ba in the following shrink label. Step D: The rotary cutter346is retracted. The blade340is retracted (seeFIG.13D). The free-end portion334of the strip336forms an end portion of the shrink label20B. The free end333becomes an open end of the shrink label20B. Step E: Subsequently the shrink label20B is advanced (FIG.13E) in the direction of the arrow ofFIG.13Eso that it leaves the mandrel332from the mandrel's bottom end, such as by means of the second roller pairs of the advancing means (not shown). The advancing means may transfer the newly-made free-end portion348of the strip336to the predetermined position at the same time that, or shortly after, the shrink label20B is transferred from the mandrel332. Preferably the shrink label20B is transferred from the mandrel332to a receptacle (not shown inFIG.13A to13E), which is further preferably positioned under the mandrel332. A subsequent shrink label20B can be made from the remaining strip of tubular material336. In this case its puncture27Ba has already been made in the puncturing step (FIG.13C) described above. Since the method comprises performing the above steps in a repeated cycle, the shrink label20B is cut to length by the cutting of one cycle, and punctured by the puncturing of a previous cycle. In summary the shrink label20B is made from the strip of tubular material336by: feeding a free-end portion334of a given length of the strip onto a columnar mandrel332; puncturing the free-end portion334at a first distance from the strip's free end333, feeding the strip336further along the mandrel332, and cutting the free-end portion334at a second distance from the strip's free end333, the second distance being greater than the first distance. The blade340may be provided upstream of the rotary cutter346, with respect to the feeding direction. The method comprises a step (not shown inFIGS.13A to13E) of perforating a flattened portion of the strip336, upstream from the mandrel332, to form the first26B and the second28B perforation lines on the shrink label20B, by means of the perforation means350reciprocating in the direction of the arrow352. When the blade340is provided above (upstream of) the rotary cutter346, this can be advantageous when the space below (downstream of) the rotary cutter346is limited. It is easier to make a puncture27B closer to one end (e.g. the top end) of the shrink label20B compared to an arrangement wherein the blade340is placed downstream from, and in particular close to, the rotary cutter346. InFIGS.7,8,10A to10E,11A to11H,12and13A to13E, the parts of the mandrel that are surrounded by the strip are shown with broken lines. 
Fourth Embodiment—Apparatus In the following an apparatus according to a fourth embodiment is described. The apparatus430is for making the third shrink label20B. As shown schematically inFIGS.14and15, a flat portion of a strip436of tubular material provided from a roll453of tubular material is fed to a blade440, a free end portion434of the strip being fed first. The apparatus430is configured to control the feeding of the flattened strip436to the blade440(the feeding direction being downward inFIG.14). A view taken along the feeding direction of the apparatus430ofFIG.14is shown inFIG.15. The transversely extending blade440has a slitting portion440aand a perforating portion440b. The blade440is provided on a reciprocating mechanism configured to move the blade in the directions of the arrow442, so that the blade can pierce and retract from the strip436. A support member454on the opposite side of the strip436to the blade440is provided to support the strip at least while it is being pierced. The support member may comprise a recess (not shown) for receiving the blade440. Fourth Embodiment—Method In the following a method according to the fourth embodiment is described with reference toFIGS.14and15. The method is for making the third shrink label20B using the apparatus430of the fourth embodiment. A straight strip436of flattened tubular material is provided, preferably by being extended from a roll453of the tubular material, and fed (downwards inFIG.14) past the blade440which pierces both walls of the strip436in a single movement. The blade440pierces one (in particular only one) folded edge of the strip436. In this way one (in particular only one) puncture27B is created. So the other folded edge of the strip436in a width direction can be left unpunctured. It may be provided that the folded edges of the strip436are not punctured during said puncturing, thus creating two separate slits being spaced apart from the first end and from the second end. This can be advantageous when a bottle has two protrusions, with one protrusion to be inserted in each slit, such as in the case of a pump dispenser comprising a T-shaped plunger handle. The feeding of the strip436is preferably stopped when the puncturing is being performed. By alternating the puncturing steps with feeding steps, several evenly spaced punctures20B can be formed in the strip436. In a subsequent and not-shown step the strip436is cut to form a shrink label20B. Preferably said cutting is performed by feeding the strip436to an opening means, such as a mandrel known in the art, to cut the strip into several lengths, each length comprising one puncture27B, thus creating several opened shrink labels20B. A rotary cutter such as the rotary cutter46,246,346of one of the preceding embodiments may be used here; a cutter other than a rotary cutter may be used. The method comprises a step (not shown inFIGS.14and15) of perforating a flattened portion of the strip436, upstream or downstream from the puncturing, to form the first26B and the second28B perforation lines, by means of a perforation means corresponding to the perforation means50,250,350of the previously described embodiments. The first26B and the second28B perforation lines are provided on each shrink label20B. The perforation means and the blade440may be driven by a common mechanism. Instead of a linearly reciprocating blade440, the blade440may be provided on the periphery of a rotating wheel, the apparatus being configured to control the speed of rotation. 
A plurality of blades440may be arranged on the periphery of the wheel. The shrink label may be placed over the receptacle in accordance with the use described above. Shrink labels may be formed by sequentially cutting the strip at predetermined intervals. Furthermore a linear series of receptacles can be arranged, each receptacle being sequentially conveyed to a common position for receiving a shrink label. Each receptacle with a shrink label may be conveyed to a processing station (not shown) for shrinking, such as a heater for heat-shrinking. In this way a receptacle assembly is made by performing the method to make a shrink label followed by using the shrink label to cover a receptacle. The invention is not limited to puncturing at the mandrel and the puncturing means may be provided at any place where the flexible tubular material is open (e.g. unflattenned), or where the flexible tubular material is flat. Any opening may be performed by an opening device executed as a mandrel or in addition to a mandrel. The opening device may comprise a guide such as a plate-like guide, a guide of varying cross-section, or any a structural member that supports the inside of the strip. Alternatively or in addition the opening device may comprise a tunnel or passageway aligned in a feeding direction and having one or more porous inner surfaces connected to a vacuum; the wall of the strip is thus pulled apart by low air pressure as the strip is received by the opening device; the porous surfaces may be stationary or conveyable; even here the opening portion is formed by moving apart inner peripheral portions of the strip. At least part of the open portion of the strip may be spaced from the opening device. Alternatively or in addition the strip may be inflated with internal pressure. The puncturing means may comprise a punch having a circular, elliptical, or polygonal (e.g. square or rectangular) section. The puncturing may create a cutout as a puncture, such as by removing a portion of the tubular material, or by partially removing the portion so as to leave a flap of material. The puncture may have a predefined (e.g. non-zero) width. Puncturing may be understood to mean making a puncture. It may be provided that a single slit is formed, or multiple slits are formed, by making the puncture. The method and/or apparatus of the first embodiment may be modified by being provided with a transversely extending blade instead of the longitudinally extending blade40. The method and/or apparatus of the first embodiment may be modified by being provided with a compound blade corresponding to the arrangement inFIG.9, instead of the slitting blade40. The method and/or apparatus of the second to fourth embodiments may be modified by being provided with a longitudinally extending blade instead of the transversely extending blade240,340,440. The method and/or apparatus of the second to fourth embodiments may be modified by being provided with a slitting blade, being an exclusively slitting blade, or a perforating blade being an exclusively perforating blade. The method and/or apparatus of any of the embodiments described above may be modified by being provided a perforating blade, being an exclusively perforating blade, as the puncturing means. The method and/or apparatus of any of the embodiments described above may be modified to make the above-described modified shrink label by omitting perforating the strip to make the first26,26A,26B and/or the second28,28A,28B perforation lines. 
The method and/or apparatus of any of the embodiments described above may be modified by providing the perforation means50,250,350at an open portion of the strip36,236,336. In the method and/or apparatus of any of the embodiments described above, the shrink label20,20A,20B may be punctured more than once and/or have more than one puncture27,27A,27B. In the first to third embodiments the cutter is provided as a rotary cutter. The invention is not limited to this and the cutter may be provided as a different type of cutter such as a flat cutter. It may be provided that the cutter peripherally cuts a part of the tubular material that is not on the mandrel. For example the part may be upstream or downstream of the mandrel. The cutting means may be configured to cut the strip by peripherally perforating the strip and then tearing the perforation. In the foregoing embodiments the protrusion is formed as a trigger or nozzle. The protrusion may alternatively or in addition comprise a handle. For example a handle can be grabbed by the user more easily when the handle is not covered or not completely covered by a shrink label. An exposed protrusion may be advantageous for other functional reasons, e.g. to expose a visual mark. So the sleeve for covering a receptacle is made of flexible tubular material and is in particular made by a method described above, wherein the sleeve has a given length measured from a first end to a second end in a longitudinal direction and may comprise a longitudinally extending puncture; the puncture is spaced apart from the first end and from the second end. The punctured region may be formed by a slit-like opening. The sleeve may comprise at least one perforation line in addition to the puncture. So the use of the sleeve to cover a receptacle, wherein the receptacle comprises a longitudinally extending body and a protrusion extending at least partially transversally to the longitudinal direction, comprises: positioning the sleeve around the receptacle so that the puncture is aligned in the peripheral direction with the protrusion, and shrinking the sleeve around at least a portion of the receptacle to insert the protrusion through an opening formed by the puncture. So because the puncture is aligned with the protrusion, the protrusion extends from the body through the opening formed from the puncture during shrinking. Making a sleeve according to the method followed by the use of the sleeve may be understood to be a method of making and using a sleeve. The opening device, sleeve, and receptacle may be coaxial during at least some steps, such as when positioning the sleeve about the receptacle. Thus the sleeve can be easily positioned about the receptacle at (essentially) the same time that it is transferred from the opening device. So the receptacle assembly comprises a receptacle and the sleeve, wherein the receptacle has a longitudinally extending body and a protrusion extending at least partially transversally to the longitudinal direction, the sleeve is in a shrunk state and covers at least a portion of the receptacle, and the protrusion extends through an opening formed by the puncture. The receptacle may comprise a body and a cap attached to the body, and further preferably the sleeve in the shrunk state covers at least a portion of the body and at least a portion of the cap. So it is less likely for the cap to unintentionally separate (e.g. unscrew) from the body. The receptacle may have a nozzle, optionally as part of the cap.
After shrinking, the perforation line may extend over at least a portion of the cap. Removal of the sleeve near the cap is facilitated. The sleeve in the shrunk state may cover the nozzle. The receptacle assembly may be formed by the aforementioned use. The shrink label as a sleeve may comprise pages for displaying e.g. a user manual, such as for medicines etc. The sleeve may show a decoration, such as text or a design. The sleeve may have a packaging function. The sleeve may be only locally shrunk, such as for tamper evidence applications. The sleeve may have a single layer or multilayer (e.g. coextruded) composition. The sleeve may be provided as a full label or as a partial label, i.e. that covers only a portion of the receptacle, such as a portion of an upper and/or a portion of a lower part of the receptacle. The tubular material may comprise metal, such as a metal foil. The sleeve may have uniform thickness in a peripheral and/or a longitudinal direction. The tubular material may have, but is not limited to having, a circular or oval section; the tubular material may have a sectional shape conformable to the shape of any opening device or receptacle, with some oversize. The term “tubular” is understood to mean at least having an inner and an outer periphery. The tubular material may be foil-like and/or film-like. When a first perforation line is provided, it may extend around at least part of the periphery of the shrink label. The perforation lines and puncture may be sized to be adjacent to the corresponding features on the bottle taking into account a shrinkage of the shrink label. For example the trigger may be arranged to be adjacent to the third perforation line when the shrink label is in the shrunk state. A portion of the sleeve comprising the puncture and any first and second perforation line may be understood to be a leak protection portion of the sleeve. The first and second perforation lines are examples of a perforation line that is in addition to the puncture. In particular the first perforation line is an example of a peripheral perforation line; the second perforation line is an example of a longitudinal perforation line. A peripheral perforation line is understood to be a perforation line extending in an at least partially peripheral direction. A longitudinal perforation line is understood to be a perforation line extending in an at least partially longitudinal direction. The third perforation line is an example of a puncture being spaced from the sleeve ends, and may preferably extend at least partially longitudinally along at least parts of the upper portion. It may be understood that, for the case that the sleeve has a perforation line, the perforation line comprising or joining with a puncture spaced from the sleeve ends, the entire perforation line may be spaced from the sleeve ends and/or may extend over only a portion of the sleeve periphery. Such a perforation line may extend longitudinally. In foregoing embodiments the sleeve is formed as a shrink label. The sleeve may be changeable from an expanded state towards a shrunk state by applying energy, such as by any one or more of: UV-light, infra-red radiation, hot air, and steam. Alternatively or in addition to shrinking the sleeve may be contracted by mechanical fastening (e.g. ties or bands). 
Alternatively or in addition the sleeve may shrink by means of humidity change or by releasing elastic energy; for example the sleeve as a stretch sleeve may be elastically expanded (expanded state) while being placed over a receptacle, after which the elastic tension is released (shrunk state). This can be done using techniques known in the art, such as by an expandable and hollow transporting mandrel or by radially separable finger members. So the sleeve may be a label such as any one or more of: a stretch label, a shrink label, and a shrink sticker. A bottle is an example of a receptacle which includes a body and may include a cap. Other examples of a receptacle include container, cup, bowl, and pot. The body may have an interior space and may have an opening. A receptacle having a sleeve applied to it may be called a receptacle assembly. The receptacle and receptacle assembly may be empty or may hold a product such as a liquid or a powder. Having shown and described various embodiments of the present invention, further adaptations of the methods and systems described herein may be accomplished by appropriate modifications by one of ordinary skill in the art without departing from the scope of the present invention. Several of such potential modifications have been mentioned, and others will be apparent to those skilled in the art. For instance, the examples, embodiments, geometries, materials, dimensions, ratios, steps, and the like discussed above are illustrative and are not required. Accordingly, the scope of the present invention should be considered in terms of the claims and is understood not to be limited to the details of structure and operation shown and described in the specification and drawings. This application is based on a Luxembourg patent application (Luxembourg Patent Application No. LU101715) filed on Mar. 30, 2020, the contents of which are incorporated herein by reference.
REFERENCE SIGNS
10,100 . . . bottle (receptacle)
12,120 . . . body
14,140 . . . neck
16,160 . . . cap
18,180 . . . trigger (protrusion)
19,190 . . . nozzle (protrusion)
20A,20B . . . shrink label (sleeve)
22,22A,22B . . . lower portion
24,24A,24B . . . upper portion
26,26A,26B . . . first perforation line
27,27A,27B,27Ba . . . third perforation line (puncture)
271B . . . central portion
272B,273B . . . end portion
28,28A,28B . . . second perforation line
29,29A,29B . . . opening
30,230,330,430 . . . apparatus
32,232,332 . . . mandrel (opening device)
33,233,333 . . . free end
34,234,334,434 . . . free-end portion
36,236,336,436 . . . strip of tubular material
40,240,340,440 . . . blade (puncturing means)
240a,440a . . . slitting portion
240b,240c,440b . . . perforating portion
42,242,342,442 . . . arrows indicating movement of blade
43,243,343 . . . arrows indicating movement of rotary cutter
44,244,344 . . . recess
46,246,346 . . . rotary cutter (cutting means)
48,248,348 . . . newly-made free-end portion
50,250,350 . . . perforation means
52,252,352 . . . arrows indicating movement of perforation means
453 . . . roll of tubular material
454 . . . support member
11858164 | DETAILED DESCRIPTION Next, the best mode for carrying out the present disclosure will be described using embodiments. A log processing apparatus1according to the present disclosure manufactures a veneer sheet having a predetermined thickness by rotating and cutting a log PW. As shown inFIG.1, the log processing apparatus1includes a lathe charger2, first and second loading conveyors4a,4barranged upstream (on the left side inFIG.1) of the lathe charger2in the transport direction of the log PW (in the right-left direction inFIG.1), a veneer lathe6arranged downstream (on the right side inFIG.1) of the lathe charger2in the transport direction of the log PW, and an electronic control unit8for controlling the entire apparatus1. The lathe charger2and the first and second loading conveyors4a,4bcorrespond to the “log feeding apparatus” of the present disclosure, and the veneer lathe6corresponds to the “processing machine” of the present disclosure, each being an example an implementation configuration. As shown inFIG.1, the lathe charger2according to an embodiment of the present disclosure mainly includes a frame10, a log rotation apparatus20supported by the frame10, a transport apparatus for temporary centering40supported by the frame10and located upstream of the log rotation apparatus20in the transport direction of the log PW, and a pendular transport apparatus50supported by the frame10and located downstream of the log rotation apparatus20in the transport direction of the log PW. The transport apparatus for temporary centering40corresponds to the “log feeding unit” of the present disclosure, and the pendular transport apparatus50corresponds to the “transport unit” of the present disclosure, each being an example an implementation configuration. As shown inFIG.1, the frame10includes a lower frame12and upper frames18,18located on the lower frame12. The lower frame12includes a front frame14located upstream in the direction along the horizontal direction among the transport directions of the log PW, a rear frame15located downstream in the direction along the horizontal direction among the transport directions of the log PW, and an intermediate frame16connecting between the front and rear frames14and15. The front and rear frames14and15includes, as shown inFIGS.2and3, bottom walls14aand15ato be set on a floor, a pair of vertical walls14b,14cand a pair of vertical walls15b,15cextending vertically from the front and rear frames14and15respectively, and is generally U-shaped when seen in the direction along the horizontal direction among the transport directions of the log PW. The vertical walls15b,15chave a height greater than the vertical walls14b,14c. On the upstream end surfaces of upper portions of the vertical walls14b,14cof the front frame14, a coupling beam13is located horizontally, as shown inFIGS.1and2, in the direction along the horizontal direction among the transport directions of the log PW. In other words, the coupling beam13joins between the vertical walls14b,14c. Sensors S1, S1are set on the coupling beam13, as shown inFIG.2, for sensing the log PW (only the sensor S1is shown inFIG.2). As shown inFIG.1, the sensors S1, S1are set between a first receiving position RP1and a first delivery position DP1. 
At the first receiving position RP1, the transport apparatus for temporary centering40receives the log PW from the second loading conveyor4b, and at the first delivery position DP1, the transport apparatus for temporary centering40delivers the log PW to later-described centering spindles24a,24bof the log rotation apparatus20. Note that the sensors S1, S1are set at a position where the optical axis of light to be emitted from the sensors S1, S1intersects with virtual vertical lines VVL, VVL passing through respective reference lines Bp, Bp set on later-described placing sections42,42. The sensors S1, S1are also set to face downward in the direction along the horizontal direction among the transport directions of the log PW. The sensors S1, S1are an example of an implementation configuration corresponding to the "log detection sensor" of the present disclosure. As shown inFIG.2, an extension piece11extending toward the upstream side in the direction along the horizontal direction among the transport directions of the log PW is integrally attached to a substantially central portion of the coupling beam13in the longitudinal direction. Sensors S2, S3for detecting the log PW are attached to the extension piece11. As shown inFIG.4, the extension piece11has a length that reaches the boundary between the first loading conveyor4aand the second loading conveyor4b. As shown inFIG.4, the sensor S2is disposed in the vicinity of the tip of the extension piece11so that the detection unit faces downward in the vertical direction. As such, delivery of the log PW from the first loading conveyor4ato the second loading conveyor4bis recognized when the sensor S2detects the log PW. The sensor S3is located closer to the coupling beam13than to the sensor S2so that the detection unit faces in the direction orthogonal to the plane containing the placement surface of the second loading conveyor4bwhere the log PW is placed. By measuring the transport distance from when the sensor S3starts to detect the log PW until it completes the detection, the diameter of the log PW in the direction in which the log PW is transported by the second loading conveyor4bcan be obtained. As shown inFIG.2, the intermediate frame16connects the upper portions of the vertical walls14b,14cof the front frame14to the upper portions of the vertical walls15b,15cof the rear frame15. As such, the lower frame12is generally U-shaped when viewed from the side (in the direction perpendicular to both the horizontal direction and the vertical direction in the transport direction of the log PW). The upper surface of the intermediate frame16is flush with the upper surfaces of the vertical walls14b,14cof the front frame14. As shown inFIG.2, rails R1are installed on the upper surface of the intermediate frame16. As shown inFIG.1, the rails R1extend from the vertical walls14b,14cof the front frame14to the downstream end of the intermediate frame16(connected portions with the vertical walls15b,15cof the rear frame15). That is, the rails R1extend in a direction along the horizontal direction in the transport direction of the log PW. Note that the vertical walls15b,15care higher than the vertical walls14b,14c, and thereby the upper surface of the intermediate frame16is lower than the upper surfaces of the vertical walls15b,15c. A sensor S4is attached to the intermediate frame16as shown inFIG.1. The sensor S4is a sensor for detecting that the shaft bearing housings22a,22bto be described later have moved a predetermined distance downstream in the transport direction of the log PW.
The sensor S4is located at a position downstream of the second receiving position RP2where the centering spindles24a,24breceive the log from the transport apparatus for temporary centering40in the transport direction of the log PW, the position being close to the later-described rails R1. The predetermined distance according to the present embodiment of the present disclosure is set as a value slightly greater than the assumed maximum diameter among diameters of logs PW to be fed to the log processing apparatus1. The position away from the second receiving position RP2by a predetermined distance downstream in the transport direction of the log PW is set to be a second delivery position DP2where the log PW is delivered from the centering spindles24a,24bto the clamping arms56,56(to be described later) of the pendular transport apparatus50. In the present embodiment, when the sensor S4starts to detect the shaft bearing housings22a,22band completes the detection, it is determined that the shaft bearing housings22a,22bhave moved the predetermined distance. The upper frames18,18are generally U-shaped when the lower frame12is viewed from the side (in the direction perpendicular to both the horizontal direction and the vertical direction in the transport direction of the log PW). As shown inFIG.2, one ends of the upper frames18,18are integrally connected to the upper surfaces of the vertical walls14b,14cof the front frame14, and the other ends are integrally connected to the upper surfaces of the vertical walls15b,15cof the rear frame15. The ends of the upper frames18,18on one side are disposed on the uppermost stream side of the upper surfaces of the vertical walls14b,14cin the direction along the horizontal direction in the transport direction of the log PW. As shown inFIGS.1to4, a coupling beam17is horizontally attached to a substantially intermediate portion in the height direction of vertical column portions18a,18aof the upper frames18,18. In other words, the vertical column portions18aand18aare connected by the coupling beam17. The coupling beam17is attached to the vertical column portions18aand18ain a state where a normal line of a mounting surface of a later-described laser measuring instrument17aof the coupling beam17is inclined with respect to the vertical direction. More specifically, the coupling beam17is inclined such that the upper mounting surface of the coupling beam17faces downstream in the direction along the horizontal direction among the transport directions of the log PW. A plurality of laser measuring instruments17afor measuring the shape of the log PW are installed on the upper mounting surface of the coupling beam17. As shown inFIG.3, the plurality of laser measuring instruments17a are arranged at equal intervals along the longitudinal direction of the coupling beam17. The inclination angle of the coupling beam17with respect to the vertical column portions18aand18ais set such that, when the laser measuring instruments17aare installed on the coupling beam17, laser beams emitted from the laser measuring instruments17aare orthogonal to the rotation axis center line of the centering spindles24a,24b, with the later-described shaft bearing housings22a,22bof the log rotation apparatus20being located at the second receiving position RP2. 
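The two distance determinations described above, namely obtaining the log diameter from the travel of the second loading conveyor4bwhile the sensor S3 detects the log PW, and judging from the sensor S4 that the shaft bearing housings22a,22bhave moved the predetermined distance, can be illustrated by the following minimal Python sketch. The encoder and sensor interfaces, as well as the polling approach, are assumptions made only for this illustration and are not part of the disclosed control unit8.

    # Illustrative sketch only: the encoder/sensor objects and their methods are
    # assumed placeholders, not disclosed hardware interfaces.
    def measure_log_diameter(conveyor_encoder, sensor_s3):
        # The diameter of the log PW in the transport direction equals the
        # conveyor travel between the start and the end of detection by S3.
        start_position = None
        while True:
            position = conveyor_encoder.position()
            if sensor_s3.detects_log() and start_position is None:
                start_position = position            # detection starts
            elif start_position is not None and not sensor_s3.detects_log():
                return position - start_position     # detection completes

    def housings_have_moved_predetermined_distance(sensor_s4):
        # Movement of the shaft bearing housings 22a, 22b by the predetermined
        # distance (set slightly greater than the assumed maximum log diameter)
        # is judged complete when S4 has started and then completed detection.
        return sensor_s4.detection_started() and sensor_s4.detection_completed()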
As shown inFIG.2, the log rotation apparatus20includes shaft bearing housings22a,22barranged on the rails R1, and centering spindles24a,24bsupported by the shaft bearing housings22a,22bso as to be rotatable and slidable in the axial center line direction, a motor M1connected to the centering spindle24avia a timing belt (not shown), fluid cylinders CL1a, CL1bhaving respectively cylinder rods (not shown) connected to one ends of the centering spindles24a,24bin the axial center line direction, and fluid cylinders CL2a, CL2b(only the fluid cylinder CL2bis shown inFIG.2) having cylinder rods (not shown) connected to shaft bearing housings22a,22b. The shaft bearing housings22a,22bcorrespond to the “centering unit” in the present disclosure, and the centering spindles24a,24bcorrespond to the “first centering spindle” and the “second centering spindle” respectively in the present disclosure, each being an example an implementation configuration. The fluid cylinders CL2a, CL2bare examples of an implementation configuration corresponding to the “drive unit” in the present disclosure. As shown inFIG.4, the shaft bearing housings22a,22bhave guided sliding portions23aand23bthat engage with the rails R1respectively. As shown inFIG.1, the shaft bearing housings22a,22bare reciprocated between the second receiving position RP2and the second delivery position DP2on the rails R1by the fluid cylinders CL2a, CL2brespectively. Here, as shown inFIG.4, the second receiving position RP2is defined as the position where the rotation axis center lines of the centering spindles24a,24bintersects the virtual vertical line VVL passing through the reference line Bp set on placing sections42,42(to be described later) of the transport apparatus for temporary centering40. The second receiving position RP2is also defined as a position where the centering spindles24a,24breceive the log PW from the transport apparatus for temporary centering40. Further, the second delivery position DP2is defined as a position located on the virtual vertical line VVL side, that is, downstream from the second receiving position RP2in the transport direction of the log PW by a distance slightly greater than the assumed maximum diameter among the diameters of the log PW that is supplied to the log processing apparatus1according to an embodiment of the present disclosure. The second delivery position DP2is also defined as a position where the log PW is delivered from the centering spindles24a,24bto the later described clamping arms56,56of the pendular transport apparatus50. The second receiving position RP2corresponds to the “second receiving position” and the “receiving position” in the present disclosure, and the second delivery position DP2is an example of an implementation configuration corresponding to the “second delivery position” and the “delivery position” in the present disclosure. The reference line Bp is an example of an implementation configuration corresponding to the “reference portion” in the present disclosure. As shown inFIGS.2and3, the centering spindles24a,24bare supported by the shaft bearing housings22a,22bin a state of facing each other, and each have a chuck (not shown) to hold the log PW between them at the cut end faces (both end faces of the log PW in the longitudinal direction). The centering spindles24a,24bare reciprocated in the axial center line direction by the fluid cylinders CL1aand CL1b. 
The log PW can be clamped by movement of the centering spindles24a,24btoward each other, and the clamping of the log PW can be released by movement of the centering spindles24a,24baway from each other. In the present embodiment, only the centering spindle24ais rotationally driven by the motor M1. While the log PW is held between the centering spindles24a,24b, when the centering spindle24ais rotated by the motor M1, the centering spindles24a,24band the log PW rotate integrally. The motor M1has a rotary encoder (not shown) which enables detection of a rotation angle of the centering spindle24a, that is, the rotation angle of the log PW. This makes it possible to control the position of the log PW at a desired rotation angle. As shown inFIG.4, the transport apparatus for temporary centering40is disposed downward in the vertical direction of the log rotation apparatus20. More specifically, the transport apparatus for temporary centering40is disposed directly below the shaft bearing housings22a,22bdisposed at the second receiving position RP2. The configuration in which the transport apparatus for temporary centering40is disposed downward in the vertical direction of the log rotation apparatus20can well prevent the log processing apparatus1from increasing the size thereof in the transport direction of the log PW. As shown inFIG.3, the transport apparatus for temporary centering40includes: placing sections42,42where the log PW loaded in from the second loading conveyor4bis received and placed; male thread rods44,44supported by the vertical walls14b,14cof the front frame14so as to extend in the vertical direction and threadedly engaged with the placing sections42,42; and motors M2, M2connected to lower ends of the male thread rods44,44. As the motors M2and M2rotate the male thread rods44,44forward and backward, the placing sections42,42are reciprocated between the first receiving position RP1and the first delivery position DP1. The motors M2and M2have a rotary encoder (not shown) and can detect the amount of movement of the placing sections42,42in the vertical direction. Accordingly, the placing sections42,42, that is, the log PW can be controlled to a desired vertical position. As shown inFIG.4, the placing sections42,42have substantially V-shaped placement surfaces42a,42athat open upward in the vertical direction, and the log PW is brought into contact with the placement surfaces42a,42ato be held. In the present embodiment, the intersection line of two planes constituting the V-shape of the placement surfaces42a,42ais used as a reference line Bp for obtaining a temporary rotation axis center line of the log PW which is described later. The placing sections42,42have guided sliding portions43,43that engage with rails R2disposed on the vertical walls14b,14cof the front frame14in the vertical direction. Thereby, the stability when the placing sections42,42are reciprocated in the vertical direction is improved. As shown inFIG.3, the pendular transport apparatus50includes: a long rotary frame52that is rotatably supported by the upper frames18,18; holders54,54that are attached to the rotary frame52integrally and rotatably and also slidably in the longitudinal direction of the rotary frame52; and clamping arms56,56slidably supported by the holders54,54. The clamping arms56,56are an example of an implementation configuration corresponding to the “first clamping arm” and the “second clamping arm” in the present disclosure. 
As shown inFIG.3, the rotary frame52has rotation shafts52a,52aat both ends in the longitudinal direction, and is supported by the shaft receiving houses53,53, where the rotation shafts52a,52aare fixed to the upper surfaces of the upper frames18,18. As shown inFIG.1, the rotation axis center lines of the rotation shafts52a,52aare aligned between the rotation axis center line of the centering spindles24a,24bat the second delivery position DP2and the rotation axis center line of later-described cutting spindles72a,72aof the veneer lathe6, in the direction along the horizontal direction among the transport directions of the log PW. A rotation shaft (not shown) of the motor M3is connected to the shaft end portion of the one rotation shaft52a, and the rotary frame52is rotated as the motor M3is driven. The motor M3has a rotary encoder (not shown) and can detect the rotation angle of the rotary frame52. Thereby, the position of the rotary frame52can be controlled to a desired rotation angle. Further, as shown inFIG.3, the rotary frame52has rails R3and R3on the lower surfaces (lower surfaces inFIGS.2and3) of both end portions other than the central portion in the longitudinal direction. The rails R3and R3extend along the longitudinal direction of the rotary frame52. Further, as shown inFIGS.2and3, the rotary frame52has a support wall52bfor supporting the fluid cylinders CL3a, CL3bon the lower surface (the lower surface inFIGS.2and3) that is located generally in the center in the longitudinal direction. The support wall52bprotrudes in the vertical direction with respect to the lower surface of the rotary frame52. Note that the fluid cylinders CL3a, CL3bare supported by the support wall52bso that the axial center line direction thereof is parallel to the longitudinal direction of the rotary frame52. The distal ends of the cylinder rods of the fluid cylinders CL3a, CL3bare connected to holders54,54, respectively. As shown inFIG.3, the holders54,54have sliding portions54a,54awith guides on the upper surfaces, and are slidably supported in the longitudinal direction of the rotary frame52by engaging the guided sliding portions54a,54awith the rails R3, R3of the rotary frame52. The holders54,54have rails R4, R4that extend in a direction perpendicular to the upper surfaces of the holders54,54, respectively. As shown inFIG.3, the clamping arms56,56have guided sliding portions56a,56a, and are slidably supported by engaging the guided sliding portions56a,56awith the rails R4and R4of the holders54,54. The clamping arms56,56are connected to motors M4and M4fixed to the holders54,54, and slide on the rails R4and R4by driving the motors M4and M4. More specifically, the clamping arms56,56have a female thread portions (not shown) and are slid back and forth along the rails R4and R4when the motors M4and M4rotate the male thread rod (not shown) forward and backward. The clamping arms56,56slide in a direction perpendicular to the upper surfaces of the holders54,54, that is, the lower surface of the rotary frame52. Further, the clamping arms56,56have claws56b,56bat the distal end portions for holding the log PW at the cut surfaces (both end surfaces in the longitudinal direction) of the log PW. The clamping arms56,56supported by the holders54,54in this way can swing together with the holders54,54around the rotation shafts52a,52aas the rotary frame52rotates. The clamping arms56,56also are slidable back and forth with respect to the holders54,54in the direction approaching and moving away from the rotation shafts52a,52a. 
Note that the motors M4and M4have a rotary encoder (not shown) and can control the position of a rotation shaft (not shown) of the motors M4and M4, that is, a male thread rod (not shown) to a desired rotation angle. As a result, the clamping arms56,56can be each controlled to a desired position. As shown inFIG.1, the first and second loading conveyors4a,4bare constituted as chain conveyors to load in the log PW to the transport apparatus for temporary centering40by winding an endless annular chain CH around a pair of sprockets62,62, rotating one of the sprockets62by a motor (not shown), and moving the chain CH in the rotation direction of the sprockets62,62. As shown inFIG.4, the first loading conveyor4ais installed so that the placement surface where the log PW is placed is parallel to the floor surface. The second loading conveyor4bhas a length between the downstream end portion of the first loading conveyor4aand the transport apparatus for temporary centering40. The second loading conveyor4bis also installed to be inclined upward from the first-loading-conveyor-4aside thereof toward the transport apparatus for temporary centering40. Specifically, the sprocket62on the first-loading-conveyor-4aside of the second loading conveyor4bis located lower than the sprocket62of the first loading conveyor4a, and the sprocket62on the transport-apparatus-for-temporary centering40side of the second loading conveyor4bis located higher than the placement surfaces42a,42a(seeFIG.4) of the placing sections42,42of the transport apparatus for temporary centering40at the first receiving position RP1. The chain CH of the second loading conveyor4bhas a plurality of claws64. The plurality of claws64prevent the log PW from dropping off from the second loading conveyor4bwhile the second loading conveyor4bis transporting the log PW. The motor (not shown) for rotating the sprocket62of the second loading conveyor4bhas a rotary encoder (not shown), and thereby the position of the log PW can be controlled to a desired position, and the transport distance of the log PW can be calculated by counting pulses output from the rotary encoder. As shown inFIGS.1to3, the veneer lathe6includes: cutting spindles72a,72brotatably supported by vertical walls15b,15cof the rear frame15; fluid cylinders CL4a, CL4bhaving cylinder rods (not shown) attached to the vertical walls15b,15cand connected to axial ends of the cutting spindles72a,72bon one side thereof; and a knife74disposed on the rear frame15so as to be movable forward and backward toward the log PW held between the cutting spindles72a,72b. The cutting spindles72a,72bcorrespond to the “first cutting spindle” and the “second cutting spindle” in the present disclosure, and the knife74is an example of an implementation configuration corresponding to the “blade” in the present disclosure. As shown inFIGS.2and3, the cutting spindles72a,72bare supported by the vertical walls15b,15cso as to face each other and also to be parallel to the centering spindles24a,24b. Further, the cutting spindles72a,72bhave chucks (not shown) at the distal end portions for holding the log PW at the cut surfaces (both end surfaces in the longitudinal direction) of the log PW. The cutting spindles72a,72bare reciprocated in the axial center line direction by the fluid cylinders CL4aand CL4b. 
The cutting spindles72a,72bare moved in a direction approaching each other to hold the log PW at the cut surfaces (both end surfaces in the longitudinal direction), and the cutting spindles72a,72bare moved in a direction away from each other to disengage the holding of the log PW at the cut surfaces (both end surfaces in the longitudinal direction). In the present embodiment, only the cutting spindle72ais driven to rotate by a motor (not shown), and the cut surfaces (both end surfaces in the longitudinal direction) of the log PW are held between the cutting spindles72a,72b. Thus, when the cutting spindle72ais rotationally driven by a motor (not shown), the cutting spindles72a,72band the log PW rotate integrally. The knife74is attached to a knife carriage (not shown) that is reciprocally movable in the horizontal direction with respect to the rear frame15. A veneer sheet having a desired thickness is peeled off from the log PW by approaching the knife carriage to the log PW held between the cutting spindles72a,72bat a predetermined speed. The electronic control unit8is configured as a microprocessor centered on a CPU. In addition to the CPU, the electronic control unit8includes a ROM for storing processing programs, a RAM for temporarily storing data, an input/output port, and a communication port. The electronic control unit8receives, through the input port, detection signals from sensors S1, S2, and S3that detect the log PW, detection signals from the sensor S4that detects that the shaft bearing housings22a,22bhave reached the second delivery position DP2, a distance to the outer surface of the log PW from the laser measuring instruments17a, and pulses from the motors M1, M2, M3, and M4and rotary encoders (not shown) of the motors. The electronic control unit8outputs, through the output port, driving signals to the first and second loading conveyors4a,4b, driving signals to the fluid cylinders CL1a, CL1b, CL2a, CL2b, CL3a, CL3b, CL4a, and CL4b, driving signals to the motors M1, M2, M3, and M4and other motors (not shown), and driving signals to the knife carriage (not shown). Next, the operation of the log processing apparatus1configured as described above, particularly the operation when a log is fed to the veneer lathe6by the lathe charger2will be described.FIG.5is a flowchart illustrating an example of a processing routine for driving a first loading conveyor that is executed by an electronic control unit8of the log processing apparatus1according to an embodiment of the present disclosure.FIG.6is a flowchart illustrating an example of a processing routine for driving a second loading conveyor that is executed by an electronic control unit8of the log processing apparatus1according to an embodiment of the present disclosure.FIGS.7and8are each a flowchart illustrating an example of a processing routine for driving a transport apparatus to obtain a temporary centering, the routine being executed by an electronic control unit8of the log processing apparatus1according to an embodiment of the present disclosure.FIGS.9and10are each a flowchart illustrating an example of a processing routine for driving a log rotation apparatus executed by the electronic control unit8of the log processing apparatus1according to an embodiment of the present disclosure.FIG.11is a flowchart illustrating an example of a processing routine for driving a pendular transport apparatus executed by the electronic control unit8of the log processing apparatus1according to an embodiment of the present disclosure. 
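The signal interface of the electronic control unit8described above can be summarized in a short sketch. The grouping, field names and units below are illustrative assumptions only and are not part of the apparatus.

```python
from dataclasses import dataclass, field

@dataclass
class ControlInputs:
    """Signals received through the input port (illustrative grouping)."""
    s1_on: bool = False                  # log detected at the placing sections 42, 42
    s2_on: bool = False                  # log loaded onto the second loading conveyor 4b
    s3_on: bool = False                  # log passing the temporary-diameter measuring point
    s4_on: bool = False                  # shaft bearing housings 22a, 22b detected by sensor S4
    laser_distances: list = field(default_factory=list)   # from laser measuring instruments 17a
    encoder_pulses: dict = field(default_factory=dict)    # rotary encoders of motors M1 to M4

@dataclass
class ControlOutputs:
    """Drive signals issued through the output port (illustrative grouping)."""
    conveyor_1_run: bool = False         # first loading conveyor 4a
    conveyor_2_run: bool = False         # second loading conveyor 4b
    cylinder_commands: dict = field(default_factory=dict)  # CL1a/b, CL2a/b, CL3a/b, CL4a/b
    motor_commands: dict = field(default_factory=dict)     # M1 to M4, other motors, knife carriage
```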
Note that the process of driving the first loading conveyor, the process of driving the second loading conveyor, the process of driving the transport apparatus for temporary centering, the process of driving the log rotation apparatus, and the process of driving the pendular transport apparatus are simultaneously executed in parallel. For ease of description, the process of driving the first loading conveyor, the process of driving the second loading conveyor, the process of driving the transport apparatus for temporary centering, the process of driving the log rotation apparatus, and the process of driving the pendular transport apparatus will be described in this order. [Process of Driving the First Loading Conveyor] In the process of driving the first loading conveyor, first, the CPU of the electronic control unit8executes a process of determining whether or not a transport flag Fs is 0 (Step S100). The transport flag Fs of the second loading conveyor is set as follows: it is set to 1 when the sensor S2is turned on, that is, when a log PW is loaded onto the second loading conveyor4b; and it is set to 0 when the second loading conveyor4bhas completed the transport of the log PW and is ready to accept a new log PW. When the transport flag Fs is 0, that is, when the second loading conveyor4bis ready to accept a new log PW, the process of driving the first loading conveyor4ais executed to load in a log PW to the second loading conveyor4b(Step S102). When it is determined in Step S100that the transport flag Fs is not 0, that is, when a log PW has already been loaded onto the second loading conveyor4b, the present routine ends without doing anything. Subsequently, a process of determining whether or not the sensor S2is turned on is executed (Step S104). When the sensor S2is turned on (seeFIG.12), the transport flag Fs of the second loading conveyor is set to 1 (Step S106), the first loading conveyor4ais stopped (Step S108), and the present routine ends. When it is determined in Step S104that the sensor S2is not turned on, the processes in Steps S102to S104are repeatedly executed until the sensor S2is turned on. [Process of Driving the Second Loading Conveyor] In the process of driving the second loading conveyor, the CPU of the electronic control unit8executes a process of determining whether or not the transport flag Fs is 1 (Step S200). When the transport flag Fs is 1, that is, when a log PW has been loaded onto the second loading conveyor4b, a process of determining whether or not a first receiving position setting flag Frp1is 1 is executed (Step S202). The first receiving position setting flag Frp1is set in the process of a routine for driving the transport apparatus for temporary centering which is described later: it is set to 1 when the placing sections42,42of the transport apparatus for temporary centering40are at the first receiving position RP1; and it is set to 0 when the placing sections42,42of the transport apparatus for temporary centering40leave the first receiving position RP1. When the first receiving position setting flag Frp1is 1, that is, when the placing sections42,42of the transport apparatus for temporary centering40are at the first receiving position RP1, a process of determining whether or not a loading completion flag Fvv1is 0 is executed (Step S204).
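As a reading aid, the first-loading-conveyor routine just described (Steps S100to S108,FIG.5) can be sketched as follows. This is a minimal sketch in Python: the io object, its method names, and the flag dictionary are hypothetical stand-ins, and the busy-wait loop abbreviates the repeated execution of Steps S102to S104.

```python
def drive_first_loading_conveyor(io, flags):
    """Sketch of the FIG. 5 routine: feed a log toward the second loading conveyor 4b
    only while that conveyor is free (transport flag Fs == 0)."""
    if flags["Fs"] != 0:            # Step S100: a log is already on the second conveyor
        return                      # end the routine without doing anything
    io.run_first_conveyor()         # Step S102: load a log toward the second conveyor
    while not io.sensor_s2_on():    # Steps S102 to S104 repeated until S2 turns on
        pass
    flags["Fs"] = 1                 # Step S106: second loading conveyor now occupied
    io.stop_first_conveyor()        # Step S108
```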
The loading completion flag Fvv1is set in the process of the present routine which is described later: it is set to 1 when the second loading conveyor4bhas completed loading of the log PW so that the placing sections42,42of the transport apparatus for temporary centering40can transport it, that is, when the center point of the temporary diameter of the log PW has reached the later-described virtual vertical line VVL; and it is set to 0 otherwise. When it is determined that the loading completion flag Fvv1is 0, that is, when the center point of the temporary diameter of the log PW has not reached the virtual vertical line VVL to be described later, in other words, when the second loading conveyor4bhas not completed loading of the log PW so that the placing sections42,42of the transport apparatus for temporary centering40can transport it, a process of driving the second loading conveyor4bis executed (Step S206), and a process of determining whether or not the sensor S3is turned on is executed (Step S208). When the sensor S3is turned on (seeFIG.13), a process of starting measurement of the temporary diameter of the log PW is executed (Step S210). The “starting measurement of the temporary diameter of the log PW” in the present embodiment means that integration of the pulses output from a rotary encoder (not shown) of a motor (not shown) that drives the second loading conveyor4bis started. The temporary diameter of the log PW is defined as the diameter of the log PW transported by the second loading conveyor4b, measured in the direction along the transport direction. Subsequently, a process of determining whether or not the sensor S3is turned off is executed (Step S212). When the sensor S3is turned off (seeFIG.14), a process of completing the measurement of the temporary diameter of the log PW is executed (Step S214). The “completing the measurement of the temporary diameter of the log PW” in this embodiment means that the integration of the pulses output from the rotary encoder (not shown) of the motor (not shown) that drives the second loading conveyor4bis ended. When it is determined in Step S212that the sensor S3is not turned off, the process of Step S212is repeatedly executed until the sensor S3is turned off. A process is then executed to calculate the temporary diameter of the log PW using the integrated value of the pulses from the turning on of the sensor S3to the turning off thereof, and to calculate the distance L1from the center point of the temporary diameter of the log PW to the virtual vertical line VVL (Step S216). It is then determined whether or not the center point of the temporary diameter of the log PW has reached the virtual vertical line VVL (Step S218). The determination of whether or not the center point of the temporary diameter of the log PW has reached the virtual vertical line VVL can be performed by integrating the pulses output since the sensor S3was turned off and determining whether or not the travel distance of the log PW, calculated by using the integrated pulse value, has reached the distance L1. When the center point of the temporary diameter of the log PW reaches the virtual vertical line VVL (seeFIG.16), the loading completion flag Fvv1is set to 1 (Step S220), driving of the second loading conveyor4bis stopped (Step S222), and the present routine ends.
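The pulse-integration steps just described (Steps S208to S216) can be sketched as below. The encoder and sensor objects, mm_per_pulse, and s3_to_vvl (the fixed conveying distance from sensor S3to the virtual vertical line VVL) are hypothetical, and the last line reflects only one plausible reading of how the distance L1is derived at Step S216.

```python
def measure_temporary_diameter_and_l1(encoder, sensor_s3, mm_per_pulse, s3_to_vvl):
    """Integrate conveyor encoder pulses while sensor S3 detects the log to obtain the
    temporary diameter, then derive L1, the remaining travel of the log's centre point
    to the virtual vertical line VVL."""
    while not sensor_s3.is_on():        # Step S208: wait for the leading edge of the log
        pass
    pulses = 0                          # Step S210: start integrating pulses
    while sensor_s3.is_on():            # Steps S212 to S214: integrate until S3 turns off
        pulses += encoder.read_new_pulses()
    temporary_diameter = pulses * mm_per_pulse
    # When S3 turns off the trailing edge is at S3, so the centre point of the temporary
    # diameter lies half a diameter downstream of S3 (an assumption for illustration).
    l1 = s3_to_vvl - temporary_diameter / 2.0
    return temporary_diameter, l1
```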
In Step S218, when it is determined that the center point of the temporary diameter of the log PW has not reached the virtual vertical line VVL yet, Step S218is repeatedly executed until the center point of the temporary diameter of the log PW reaches the virtual vertical line VVL. When the center point of the temporary diameter of the log PW reaches the virtual vertical line VVL (seeFIG.16), the loading completion flag Fvv1is set to 1 (Step S220), driving of the second loading conveyor4bis stopped (Step S222), and the present routine ends. When it is determined in Step S200that the transport flag Fs of the second loading conveyor is not 1, when it is determined in Step S202that the first receiving position setting flag Frp1is not 1, when it is determined in Step S204that the loading completion flag Fvv1is not 0, or when it is determined in Step S208that the sensor S3is not turned on, the present routine ends without doing anything. Here, the electronic control unit8that executes the processing routine for driving the second loading conveyor is an example of an implementation configuration corresponding to the “control unit” in the present disclosure. [Process of Driving the Transport Apparatus for Temporary Centering] In the process of driving the transport apparatus for temporary centering, the CPU of the electronic control unit8executes a process to determine whether or not the first receiving position setting flag Frp1is 1 (Step S300), and when the first receiving position setting flag Frp1is 1, that is, when the placing sections42,42of the transport apparatus for temporary centering40are at the first receiving position RP1, the CPU executes a process to determine whether or not the loading completion flag Fvv1is 1 (Step S302). When the loading completion flag Fvv1is 1, that is, when the center point of the temporary diameter of the log PW has reached the virtual vertical line VVL (seeFIG.16), the CPU executes a process to determine whether or not a centering spindle holding flag Fcs is 1 (Step S304). Here, the centering spindle holding flag Fcs is set in a process of driving the log rotation apparatus which is described later, and is set to 1 when the log PW is held at the cut surfaces (both end surfaces in the longitudinal direction) between the centering spindles24a,24b, and is set to 0 when the holding is released. When the centering spindle holding flag Fcs is 1, that is, when the log PW is held at the cut surfaces (both end surfaces in the longitudinal direction) between the centering spindles24a,24b, the CPU executes a process to determine whether or not a second delivery position setting flag Fdp2is 1 (Step S306). Here, the second delivery position setting flag Fdp2is set in a process of driving the log rotation apparatus which is described later, and is set to 1 when the shaft bearing housings22a,22bof the log rotation apparatus20have reached the second delivery position DP2where the centering spindles24a,24bdeliver the log PW to the clamping arms56,56of the pendular transport apparatus50, and is set to 0 when the shaft bearing housings22a,22bleave the second delivery position DP2.
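The checks of Steps S304and S306, together with Step S308described in the next paragraph, form a simple interlock that decides whether the placing sections42,42may be raised. A minimal sketch, assuming the flags are kept in a dictionary keyed by the names used in the text:

```python
def may_raise_placing_sections(flags):
    """Raise the placing sections 42, 42 only when the log previously passed to the
    centering spindles cannot be struck."""
    if flags["Fcs"] != 1:      # Step S304: no log is held between the centering spindles
        return True
    if flags["Fdp2"] == 1:     # Step S306: held log has reached the second delivery position
        return True
    return False               # otherwise the routine ends without doing anything
```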
When it is determined in Step S304that the centering spindle holding flag Fcs is not 1, that is, when the holding of the log PW at the cut surfaces (both end surfaces in the longitudinal direction) between the centering spindles24a,24bis released, or when it is determined in Step S306that the second delivery position setting flag Fdp2is set to 1, that is, when the shaft bearing housings22a,22bof the log rotation apparatus20have reached the second delivery position DP2, in other words, when the centering spindles24a,24b(the shaft bearing housings22a,22b) holding the log PW at the cut surfaces (both end surfaces in the longitudinal direction) therebetween are away from the second receiving position RP2by a distance equal to or greater than the assumed maximum diameter of the diameters of the logs PW to be fed to the log processing apparatus1, a process is executed for raising the placing sections42,42of the transport apparatus for temporary centering40and resetting the first receiving position setting flag Frp1and the loading completion flag Fvv1to 0 (Step S308). In other words, the placing sections42,42of the transport apparatus for temporary centering40that have the log PW thereon are raised only when the raising of the placing sections42,42does not cause the log PW under transportation by the placing sections42,42to be brought into contact with the log PW under transportation by the shaft bearing housings22a,22bof the log rotation apparatus20. In this way, in the present embodiment, the log PW is transported to the first delivery position DP1based only on the determination of whether or not the log PW under transportation by the placing sections42,42would be brought into contact with the log PW under transportation by the shaft bearing housings22a,22b, even when the placing sections42,42having the log PW are raised before the shaft bearing housings22a,22bof the log rotation apparatus20(the centering spindles24a,24b) reach the second receiving position RP2. As a result, the time required to transport the log PW to the cutting spindles72a,72bcan be reduced. Subsequently, a process of determining whether or not the sensors S1, S1are turned on is executed (Step S310). When the sensors S1, S1are turned on (seeFIG.17), a process of calculating the distance L2is executed (Step S312), the distance L2being the distance required to move the placing sections42,42to the first delivery position DP1, that is, the distance by which the placing sections42,42need to be moved to align the temporary rotation axis center line of the log PW located at the first receiving position RP1with the rotation axis center lines of the centering spindles24a,24bof the log rotation apparatus20located at the second receiving position RP2. If it is determined in Step S310that the sensors S1, S1are not turned on, the process of Step S310is repeatedly executed until the sensors S1, S1are turned on, and when the sensors S1, S1are turned on (seeFIG.17), the process of calculating the distance L2necessary for moving the placing sections42,42to the first delivery position DP1is executed (Step S312). Here, in this embodiment, the distance L2is calculated by integrating the pulses output from the rotary encoders of the motors M2and M2that reciprocate the placing sections42,42, and using the integrated value of the pulses.
Specifically, as shown inFIG.15, a height Hs1in the vertical direction from the reference line Bp set on the placing sections42,42at the first receiving position RP1to the sensors S1, S1, a height Hss in the vertical direction from the sensors S1, S1to the rotation axis center line of the centering spindles24a,24bwhen the shaft bearing housings22a,22bare in the second receiving position RP2, and an opening angle2θ between the placement surfaces42a,42aof the placing sections42,42are measured in advance and stored in the ROM of the electronic control unit8. When the log PW is detected by the sensors S1, S1, a height Hbp by which the reference line Bp has moved is calculated, and the distance L2is obtained using the equations (1) to (3). Here, in this embodiment, the height Hbp is obtained by using an integrated value of pulses output from the rotary encoders of the motors M2and M2until the log PW is detected by the sensors S1, S1after the placing sections42,42start to rise. In the equations, r is the radius of the virtual circle VC that is in contact with the two placement surfaces42a,42aand the optical axis of the light emitted from the sensors S1, S1, and is defined as the virtual radius of the log PW; and Hbc is the height from the reference line Bp on the placing sections42,42to the temporary rotation axis center line of the log PW when the sensors S1, S1detect the log PW. Here, the height Hs1corresponds to the “sixth distance” in the present disclosure, the height Hss corresponds to the “seventh distance” in the present disclosure, and the opening angle2θ corresponds to the “geometric shape of the placing section” in the present disclosure. The height Hbp corresponds to the “movement amount” in the present disclosure, and the distance L2is an example of an implementation configuration corresponding to the “eighth distance” in the present disclosure. An embodiment in which the distance L2is calculated using the equations (1) to (3) is an example of the implementation structure corresponding to “the controller calculates a virtual radius of the log along the direction from the first receiving position to the first delivery position, using a sixth distance from the reference portion to the log detection sensor when the log feeding unit is at the first receiving position, a displacement of the reference portion when the log feeding unit moves from the first receiving position to the position where the log is detected by the log detection sensor, and the geometric shape of the placing section, and the controller further calculates an eighth distance from the sum of the virtual radius and a seventh distance to the rotation axis center line of the first and second centering spindles when the centering unit is at the second receiving position, so as to cause the log feeding unit to move by the eighth distance after the log detection sensor has detected the log” of the present disclosure.
[Mathematical 1]
L2=r+Hss  (1)
r=Hbc·cos θ  (2)
Hbc=Hs1−Hbp−r  (3)
Thus, when the distance L2is obtained, a process of determining whether or not the placing sections42,42have reached the first delivery position DP1, that is, whether or not the placing sections42,42have been raised so that the temporary rotation axis center line of the log PW is aligned with the rotation axis center line of the centering spindles24a,24b, is executed (Step S314).
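Equations (2) and (3) as printed are simultaneous in r and Hbc; eliminating r gives Hbc = (Hs1 − Hbp)/(1 + cos θ), from which r and L2follow. The short sketch below only rearranges the equations as printed; the variable names mirror the text, and θ is half of the opening angle2θ, in radians.

```python
import math

def compute_l2(hs1, hss, hbp, theta_rad):
    """Distance L2 from equations (1) to (3):
    L2 = r + Hss, r = Hbc*cos(theta), Hbc = Hs1 - Hbp - r."""
    cos_t = math.cos(theta_rad)
    hbc = (hs1 - hbp) / (1.0 + cos_t)   # solving equations (2) and (3) together
    r = hbc * cos_t                      # equation (2): virtual radius of the log PW
    return r + hss                       # equation (1)
```

For purely illustrative numbers, Hs1 = 500 mm, Hbp = 200 mm, Hss = 150 mm and θ = 45° give Hbc ≈ 175.7 mm, r ≈ 124.3 mm and L2 ≈ 274.3 mm.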
When the placing sections42,42have reached the first delivery position DP1, that is, when the placing sections42,42have been raised so that the temporary rotation axis center line of the log PW has been aligned with the rotation axis center line of the centering spindles24a,24b, driving of the placing sections42,42is stopped (Step S316). In Step S314, when it is determined that the placing sections42,42have not reached the first delivery position DP1, that is, when the placing sections42,42have not been raised so that the temporary rotation axis center line of the log PW has been aligned with the rotation axis center line of the centering spindles24a,24b, Step S314is repeatedly executed until the placing sections42,42reach the first delivery position DP1, that is, until the placing sections42,42have been raised so that the temporary rotation axis center line of the log PW has been aligned with the rotation axis center line of the centering spindles24a,24b, and then driving of the placing sections42,42is stopped (Step S316). Then, the first delivery position setting flag Fdp1is set to 1, and an elapsed descending time flag Ftd is set to 0 (Step S318). Here, the first delivery position setting flag Fdp1is set to 1 when the placing sections42,42reach the first delivery position DP1(seeFIG.18), that is, when the placing sections42,42have been raised so that the temporary rotation axis center line of the log PW has been aligned with the rotation axis center line of the centering spindles24a,24b; otherwise it is set to 0. The elapsed descending time flag Ftd is set in the process of the present routine which is described later: it is set to 1 when a predetermined period of time Td* has elapsed since the placing sections42,42were driven toward the first receiving position RP1; and it is set to 0 when the placing sections42,42have reached the first delivery position DP1. In this way, in the present embodiment, before a later-described process of measuring the cutting axis center line of the log PW by the log rotation apparatus20, that is, before the log PW is delivered to the centering spindles24a,24b, the temporary rotation axis center line of the log PW is obtained, and the temporary rotation axis center line is aligned with the rotation axis center line of the centering spindles24a,24b. Thus, when both cut faces (both end surfaces in the longitudinal direction) of the log PW are held between the centering spindles24a,24b, the deviation (deviation in axial center line) between the cutting axis center line of the log PW and the rotation axis center line of the centering spindles24a,24bcan be reduced. Thereby, it is possible to decrease the swinging of the log PW during the rotation of the log PW (Step S410) by the centering spindles24a,24bexecuted in the processing routine of driving the log rotation apparatus, which is described later. Note that the cutting axis center line of the log PW is defined as the rotation axis center line of the log PW when both cut faces (both end surfaces in the longitudinal direction) of the log PW are held between the cutting spindles72a,72b. Subsequently, a process of determining whether or not the centering spindle holding flag Fcs is 1 is executed (Step S320).
When it is determined that the centering spindle holding flag Fcs is not 1, that is, when the log PW is not held at the cut ends (both end surfaces in the longitudinal direction) between the centering spindles24a,24b, the process of Step S320is repeatedly executed until the centering spindle holding flag Fcs is turned to 1, that is, until the cut ends (both end surfaces in the longitudinal direction) of the log PW are held between the centering spindles24a,24b. When the centering spindle holding flag Fcs is 1, a process is executed for driving the placing sections42,42to return to the first receiving position RP1(seeFIG.19), and for counting the elapsed descending time Td from the start of the driving toward the first receiving position RP1using a timer (not shown) (Step S322). Next, the first delivery position setting flag Fdp1is reset to 0 (Step S324), and a process of determining whether or not the elapsed descending time Td has reached the predetermined period of time Td* is executed (Step S326). Here, the predetermined period of time Td* is set as a period of time in which the placing sections42,42can descend to a position where the log PW, even if rotated, does not interfere with the placing sections42,42. When the elapsed descending time Td has reached the predetermined period of time Td*, that is, when the placing sections42,42have descended to a position where the log PW, even if rotated, does not interfere with the placing sections42,42, the elapsed descending time flag Ftd is set to 1 (Step S328). When it is determined that the elapsed descending time Td has not reached the predetermined period of time Td*, that is, when the placing sections42,42have not yet descended to a position where the log PW, even if rotated, does not interfere with the placing sections42,42, the process of Step S326is repeatedly executed until the elapsed descending time Td reaches the predetermined period of time Td*, and when the elapsed descending time Td reaches the predetermined period of time Td*, the elapsed descending time flag Ftd is set to 1 (Step S328). Then, the process of determining whether or not the placing sections42,42have reached the first receiving position RP1is executed (Step S330). When the placing sections42,42have reached the first receiving position RP1, the first receiving position setting flag Frp1is set to 1, and the transport flag Fs of the second loading conveyor is reset to 0 (Step S332), and the present routine ends. In contrast, when it is determined that the placing sections42,42have not yet reached the first receiving position RP1, the process of Step S330is repeated until the placing sections42,42reach the first receiving position RP1. When the placing sections42,42have reached the first receiving position RP1, the first receiving position setting flag Frp1is set to 1, and the transport flag Fs of the second loading conveyor is reset to 0 (Step S332), and the present routine ends.
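The descent interlock of Steps S322to S328can be sketched as follows; the io object and its methods are hypothetical stand-ins for the motor drivers, and the busy-wait abbreviates the repeated execution of Step S326.

```python
import time

def lower_placing_sections(io, flags, td_star_s):
    """Drive the placing sections 42, 42 back toward RP1 and set the elapsed-descending-
    time flag Ftd once the predetermined period Td* has passed, i.e. once the sections
    are taken to be clear of the log even if it is rotated."""
    io.drive_placing_sections_toward_rp1()             # Step S322
    started = time.monotonic()
    flags["Fdp1"] = 0                                   # Step S324
    while time.monotonic() - started < td_star_s:       # Step S326: wait until Td >= Td*
        pass
    flags["Ftd"] = 1                                    # Step S328: rotation may start
```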
Note that when the first receiving position setting flag Frp1is not 1 in Step S300, that is, when the placing sections42,42of the transport apparatus for temporary centering40are not at the first receiving position RP1, or when the loading completion flag Fvv1is not 1 in Step S302, that is when the loading of the log PW by the second loading conveyor4bhas not completed so that the placing sections42,42of the transport apparatus for temporary centering40can transport the log PW, or when the second delivery position setting flag Fdp2is not 1 in Step S306, that is, when the shaft receiving houses22a,22bare not at the second delivery position DP2, the present routine is terminated without doing anything. [Process of Driving the Log Rotation Apparatus] In the process of driving the log rotation apparatus, the CPU of the electronic control unit8determines whether or not the second receiving position setting flag Frp2is 1 (Step S400) and whether or not the first delivery position setting flag Fdp1is 1 (Step S402). When both of the second receiving position setting flag Frp2and the first delivery position setting flag Fdp1are 1, that is, the centering spindles24a,24b(shaft receiving houses22a,22b) have reached the second receiving position RP2and also the placing sections42,42have reached the first delivery position DP1, the fluid cylinders CL1a, CL1b(seeFIGS.2and3) are driven to approach the centering spindles24a,24bto each other, so that the log PW is held between the centering spindles24a,24bat the cut surfaces (both end surfaces in the longitudinal direction) of the log PW (Step S404), and the centering spindle holding flag Fcs is set to 1 (Step S406). At this point of time, the temporary rotation axis center line of the log PW is aligned with the rotation axis center line of the centering spindles24a,24b. Subsequently, a process of determining whether or not the elapsed descending time flag Ftd is 1 is executed (Step S408). When the elapsed descending time flag Ftd is 1, that is, when the placing sections42,42are lowered to a position where the log PW if rotated does not interfere with the placing sections42,42, the motor M1is driven, and the centering spindles24a,24bare rotated one time while the log PW is held therebetween, so that the outer peripheral shape of the log PW is measured (Step S410). Here, in the present embodiment, the measurement of the outer peripheral shape of the log PW is performed by using the distance from the plurality of laser measuring instruments17ato the outer peripheral surface of the log PW and a pulse output from a rotary encoder (not shown) of the motor M1, and measuring the distance to the outer peripheral surface of the log PW at a certain angle around the rotation axis center line of the centering spindles24a,24b. Thus, based on the measured outer shape of the log PW, the cutting axis center line of the log PW is calculated (Step S412), and the spindles24a,24bare rotated to align the log PW in the rotational direction at a rotation angle α corresponding to the calculated cutting axis center line of the log PW (Step S414). Here, the cutting axis center line of the log PW is defined as the rotation axis center line of the log PW from which a veneer sheet can be obtained with a highest yield from the log PW. 
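The measurement of Step S410and the calculation of Step S412can be illustrated with the sketch below for a single cross-section (each laser measuring instrument17agives one such profile along the log length). The helper objects are hypothetical, and, because this excerpt does not state how the yield-maximising cutting axis center line is actually computed, the centroid used in the second function is only a stand-in for that calculation.

```python
import math

def measure_profile(laser, encoder, samples_per_rev, axis_to_laser):
    """Step S410 (sketch): while the log makes one revolution between the centering
    spindles, record the laser distance at each encoder angle and convert it to a
    radius about the current rotation axis.  'axis_to_laser' is the fixed distance
    from the spindle axis to the laser measuring instrument 17a (an assumption)."""
    profile = []
    for i in range(samples_per_rev):
        angle = 2.0 * math.pi * i / samples_per_rev
        encoder.wait_for_angle(angle)                    # hypothetical helper
        radius = axis_to_laser - laser.read_distance()   # distance to the outer surface
        profile.append((angle, radius))
    return profile

def estimate_cutting_axis(profile):
    """Stand-in for Step S412: the excerpt does not describe the yield-maximising
    computation, so the centroid of the measured surface points is used purely as an
    illustrative placeholder for the per-end cutting-axis estimate."""
    xs = [r * math.cos(a) for a, r in profile]
    ys = [r * math.sin(a) for a, r in profile]
    return sum(xs) / len(xs), sum(ys) / len(ys)
```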
In the state where the centering spindles24a,24bare at the second delivery position DP2, when the two intersections P1and P2between the cutting axis center line of the log PW and both cut surfaces (both end faces in the longitudinal direction) of the log PW are viewed from one side in the direction along the rotational axis of the centering spindles24a,24b, the rotation angle α corresponding to the cutting axis center line of the log PW is defined, as shown inFIG.20, as a rotation angle of the log PW by which the virtual line VL1between the two intersections P1, P2moves to pass through the rotation axis center line52a′ of the rotation shaft52aof the rotary frame52of the pendular transport apparatus50, that is, the virtual line VL1moves to be aligned with one of the countless radial lines around the rotation axis center line52a′. At the same time as aligning the log PW at the rotation angle α corresponding to the cutting axis center line, the fluid cylinders CL2a, CL2bare driven to move the shaft bearing housings22a,22btoward the second delivery position DP2(Step S416, seeFIG.21), and the second receiving position setting flag Frp2is reset to 0 (Step S418). Here, the second receiving position setting flag Frp2is set to 1 when the shaft bearing housings22a,22bare at the second receiving position RP2, and is reset to 0 when the shaft bearing housings22a,22bleave the second receiving position RP2. Subsequently, a process of determining whether or not the shaft bearing housings22a,22bhave reached the second delivery position DP2is executed (Step S420), and when the shaft bearing housings22a,22bhave reached the second delivery position DP2, movement of the shaft bearing housings22a,22bis stopped (Step S422), and the second delivery position setting flag Fdp2is set to 1 (Step S424). At this time, the shaft bearing housings22a,22bare located at a position away from the second receiving position RP2by a distance greater than the assumed maximum diameter among the diameters of the logs PW to be fed to the log processing apparatus1. In the present embodiment, when the sensor S4is turned off after being turned on, it is determined that the shaft bearing housings22a,22bhave reached the second delivery position DP2. In contrast, when it is determined in Step S420that the shaft bearing housings22a,22bhave not yet reached the second delivery position DP2, the processes in Steps S416to S420are repeatedly executed until the shaft bearing housings22a,22breach the second delivery position DP2. Next, a process of determining whether or not the clamping arm holding flag Fsa is 1 is executed (Step S426). Here, the clamping arm holding flag Fsa is set in the routine of the process of driving the pendular transport apparatus which will be described later: it is set to 1 when the clamping arms56,56hold the log PW therebetween at the cut ends (both end surfaces in the longitudinal direction) of the log PW; and it is set to 0 when the holding is released. When the clamping arm holding flag Fsa is 1, that is, when the clamping arms56,56are holding the log PW therebetween at the cut ends (both end surfaces in the longitudinal direction) of the log PW (seeFIG.22), holding of the log PW between the centering spindles24a,24bat the cut ends (both end surfaces in the longitudinal direction) is released (Step S428).
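The rotation angle α defined above with reference toFIG.20can be found numerically once the two intersections P1, P2and the rotation axis center line52a′ are expressed in a plane perpendicular to the spindle axis. The brute-force search below is an illustration under those assumed coordinates, not the apparatus's actual computation.

```python
import math

def _rotate(point, alpha):
    x, y = point
    return (x * math.cos(alpha) - y * math.sin(alpha),
            x * math.sin(alpha) + y * math.cos(alpha))

def _point_to_line_distance(c, a, b):
    (cx, cy), (ax, ay), (bx, by) = c, a, b
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    if length == 0.0:                       # P1 and P2 coincide in the projection
        return math.hypot(cx - ax, cy - ay)
    return abs(dy * (cx - ax) - dx * (cy - ay)) / length

def find_rotation_angle_alpha(p1, p2, pivot, steps=3600):
    """Rotate the projected intersections P1, P2 about the spindle rotation axis (the
    origin) until the virtual line VL1 joining them passes through 'pivot', i.e. the
    rotation axis center line 52a' of the rotary frame 52."""
    best_alpha, best_error = 0.0, float("inf")
    for i in range(steps):
        alpha = 2.0 * math.pi * i / steps
        error = _point_to_line_distance(pivot, _rotate(p1, alpha), _rotate(p2, alpha))
        if error < best_error:
            best_alpha, best_error = alpha, error
    return best_alpha
```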
In contrast, when the clamping arm holding flag Fsa is not 1, that is, when the clamping arms56,56are not holding the log PW therebetween at the cut ends (both end surfaces in the longitudinal direction) of the log PW, the process of Step S426is repeatedly executed until the clamping arms56,56hold the log PW therebetween at the cut ends (both end surfaces in the longitudinal direction) of the log PW. When the clamping arm holding flag Fsa turns to 1, that is, when the clamping arms56,56hold the log PW therebetween at the cut ends (both end surfaces in the longitudinal direction) of the log PW, holding of the log PW between the centering spindles24a,24bat the cut ends (both end surfaces in the longitudinal direction) is released (Step S428). Then, the centering spindle holding flag Fcs is reset to 0 (Step S430), and the shaft bearing housings22a,22bare moved and returned to the second receiving position RP2(Step S432, seeFIG.23), and the second delivery position setting flag Fdp2is reset to 0 (Step S434). That is, when the log PW is delivered from the centering spindles24a,24bto the clamping arms56,56(Step S428), simultaneously the shaft bearing housings22a,22bare moved toward the second receiving position RP2(Step S432). Then, a process of determining whether or not the shaft bearing housings22a,22bhave reached the second receiving position RP2is executed (Step S436), and when the shaft bearing housings22a,22bhave reached the second receiving position RP2(seeFIG.24), the second receiving position setting flag Frp2is set to 1 (Step S438), and the present routine ends. In contrast, when it is determined in Step S436that the shaft bearing housings22a,22bhave not reached the second receiving position RP2, the processes in Steps S432to S436are repeated until the shaft bearing housings22a,22breach the second receiving position RP2. According to the present embodiment, the log PW is delivered from the centering spindles24a,24bto the clamping arms56,56(Step S428), and simultaneously, the shaft bearing housings22a,22bare moved toward the second receiving position RP2(Step S432). In addition, when the shaft bearing housings22a,22bhave returned to the second receiving position RP2, if the placing sections42,42have reached the first delivery position DP1(Step S402), that is, if a new log PW is set at the second receiving position RP2, the new log PW is held between both centering spindles24a,24b(at both end faces in the longitudinal direction) (Step S404), and the new log PW is rotated to measure the outer peripheral shape (Step S410), and the cutting axis center line of the new log PW is calculated (Step S412), so that the new log PW is aligned at a rotation angle α corresponding to the cutting axis center line of the new log PW (Step S414). Also, the shaft bearing housings22a,22bare moved to the second delivery position DP2(Steps S416to S424) for preparing delivery of the new log PW to the clamping arms56,56. That is, the new log PW can be prepared for delivery to be held between the clamping arms56,56without waiting for the process of delivering a log PW from the clamping arms56,56to the cutting spindles72a,72b(Step S518), which is executed by a processing routine of driving the pendular transport apparatus to be described later. As a result, it is possible to further reduce the time required for transporting the log PW to the cutting spindles72a,72b.
When it is determined in Step S400that the second receiving position setting flag Frp2is not 1, that is, when the shaft bearing housings22a,22bhave not reached the second receiving position RP2, when it is determined in Step S402that the first delivery position setting flag Fdp1is not 1, that is, when the placing sections42,42have not reached the first delivery position DP1, or when it is determined in Step S408that the elapsed descending time flag Ftd is not 1, that is, when the predetermined period of time Td* has not elapsed since the placing sections42,42were driven toward the first receiving position RP1, the present routine ends without doing anything. [Process of Driving the Pendular Transport Apparatus] In the process of driving the pendular transport apparatus, the CPU of the electronic control unit8executes a process of determining whether or not the second delivery position setting flag Fdp2is 1 (Step S500). When the second delivery position setting flag Fdp2is 1, that is, when the shaft bearing housings22a,22bof the log rotation apparatus20have reached the second delivery position DP2, the postures of the clamping arms56,56are set according to the cutting axis center line of the log PW (Step S502). The “postures of the clamping arms56,56set according to the cutting axis center line of the log PW” are defined as a state where, as shown inFIG.20, the sliding axis center lines56c,56cof the clamping arms56,56(the center lines in the swinging direction of the clamping arms56,56passing through the rotation shaft52a) are aligned with the virtual line VL1connecting the two intersections P1and P2of the log PW positioned at the rotation angle α (the virtual line VL1passing through the rotation axis center line52a′ when the centering spindles24a,24bare at the second delivery position DP2), and where the relative positional relationship between one clamping arm56and the intersection P1is the same as the relative positional relationship between the other clamping arm56and the intersection P2. Thus, when the postures of the clamping arms56,56corresponding to the cutting axis center line of the log PW are set, the motor M3is driven to rotate the rotary frame52so as to be in the set posture (rotation angle β, seeFIGS.20and22), and the motors M4and M4are driven to slide the clamping arms56,56(Step S504). Then, the fluid cylinders CL3a, CL3b(seeFIGS.2and3) are driven so that the clamping arms56,56approach each other, and the cut surfaces (both end surfaces in the longitudinal direction) of the log PW are held by the clamping arms56,56respectively (Step S506), and the clamping arm holding flag Fsa is set to 1 (Step S508). Subsequently, a process of determining whether or not the centering spindle holding flag Fcs is 0 is executed (Step S510), and when the centering spindle holding flag Fcs is 0, that is, when the holding of the log PW at the cut ends (both end surfaces in the longitudinal direction) of the log PW by the centering spindles24a,24bis released, a process of determining whether or not the preparation of the veneer lathe6is completed is executed (Step S512). When the preparation of the veneer lathe6has been completed, a process of delivering the log PW to the cutting spindles72a,72bis executed (see Step S518,FIGS.23and24).
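The posture setting of Step S502, described above with reference toFIG.20, can be illustrated as below. The planar coordinates and the zero reference of the rotation angle β are assumptions for illustration.

```python
import math

def clamp_posture(p1_aligned, p2_aligned, pivot):
    """After the log has been positioned at the rotation angle alpha, the virtual line
    VL1 through the aligned intersections P1 and P2 passes through the pivot 52a'.
    The rotary-frame angle beta is then the direction of VL1 seen from the pivot, and
    each clamping arm 56 is slid out by the distance from the pivot to its intersection."""
    (x1, y1), (x2, y2) = p1_aligned, p2_aligned
    cx, cy = pivot
    beta = math.atan2(y1 - cy, x1 - cx)        # swing angle of the rotary frame 52
    slide_1 = math.hypot(x1 - cx, y1 - cy)     # stroke of one clamping arm 56
    slide_2 = math.hypot(x2 - cx, y2 - cy)     # stroke of the other clamping arm 56
    return beta, slide_1, slide_2
```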
In contrast, when it is determined in Step S512that the preparation of the veneer lathe6is not completed, the process of Step S512is repeatedly executed until the preparation of the veneer lathe6is completed, and when the preparation of the veneer lathe6is completed, the process of delivering the log PW to the cutting spindles72a,72bis executed (see Step S518,FIGS.23and24). Then, a process of determining whether or not the delivery of the log PW to the cutting spindles72a,72bis completed is executed (Step S520). When the delivery of the log PW to the cutting spindles72a,72bis completed, the holding of the log PW at the cut ends (both end surfaces in the longitudinal direction) of the log PW by the clamping arms56,56is released (Step S522), the clamping arm holding flag Fsa is reset to 0 (Step S524), and the present routine ends. In contrast, in Step S520, when it is determined that the delivery of the log PW to the cutting spindles72a,72bhas not yet been completed, the process of Step S520is repeatedly executed until the delivery of the log PW to the cutting spindles72a,72bis completed. When delivery of the log PW to the cutting spindles72a,72bis completed, the clamping arms56,56release the log PW at the cut ends (both end surfaces in the longitudinal direction) of the log PW (Step S522), and the clamping arm holding flag Fsa is reset to 0 (Step S524), and this routine ends. In contrast, when it is determined in Step S500that the second delivery position setting flag Fdp2is not 1, that is, when the shaft bearing housings22a,22bof the log rotation apparatus20have not reached the second delivery position DP2, or when it is determined in Step S510that the centering spindle holding flag Fcs is not 0, that is, when the holding of the log PW at the cut ends (both end surfaces in the longitudinal direction) of the log PW by the centering spindles24a,24bhas not yet been released, the present routine ends without doing anything. In the present embodiment, the second delivery position DP2and the sensor S4are configured to be arranged at the same position, but the present disclosure is not limited thereto. For example, the second delivery position DP2may be arranged downstream of the position where the sensor S4is located in the transport direction of the log PW. In the case of this configuration, before the shaft bearing housings22a,22breach the second delivery position DP2, a new log PW can be prepared at the first delivery position DP1(the second receiving position RP2) by the transport apparatus for temporary centering40. In the present embodiment, whether or not the placing sections42,42have descended to a position where the rotated log PW does not interfere with the placing sections42,42is determined by measuring the elapsed time from when the placing sections42,42are driven toward the first receiving position RP1, but the present disclosure is not limited thereto. For example, it may be configured to determine whether or not the placing sections42,42have reached a position that does not interfere with the log PW by measuring the movement distance of the placing sections42,42from when the placing sections42,42are driven toward the first receiving position RP1and determining whether or not the measured movement distance is long enough for the log PW not to interfere with the placing sections42,42even if rotated.
In the present embodiment, the sensor S4is configured to determine whether or not the shaft bearing housings22a,22bhave reached the second delivery position DP2, but the present disclosure is not limited thereto. For example, it may be configured to determine whether or not the movement distance is equal to or larger than the assumed maximum diameter of the diameters of logs PW to be fed to the log processing apparatus1by measuring the movement distance of the shaft bearing housings22a,22b. The movement distances of the shaft bearing housings22a,22bcan be measured, for example, by detecting the strokes of the fluid cylinders CL2a, CL2b. In the present embodiment, the sensor S4detects whether the log PW transported by the placing sections42,42is not brought into contact with the log PW transported by the shaft bearing housings22a,22bof the log rotation apparatus20when the placing sections42,42of the transport apparatus for temporary centering40having the log PW are raised, but the present disclosure is not limited thereto. For example, a configuration may be adopted in which a temporary diameter of the log PW transported by the second loading conveyor4bis stored in a predetermined area of the ROM of the electronic control unit8and the calculated temporary diameter of the log PW is used to detect whether the log PW transported by the placing sections42,42is not in a state to contact the log PW transported by the shaft bearing housings22a,22beven if the placing sections42,42are raised. In this case, the process of driving the second loading conveyor inFIG.25and the process of driving the transport apparatus for temporary centering inFIG.26can be performed instead of the process of driving the second loading conveyor inFIG.6and the process of driving the transport apparatus for temporary center inFIG.7. In the case of this configuration, the sensor S4is used only to detect whether or not the shaft bearing housings22a,22bhave reached the second delivery position DP2. The process of driving the second loading conveyor inFIG.25is the same as the process inFIG.6except that the process of Step S216is changed to the process of Step S1216. The process of driving the transport apparatus for temporary centering inFIG.26is the same as the process inFIG.7except that the process of Step S306is changed to the processes of Steps S1306and S1307. Therefore, hereinafter, only the portions of the driving process ofFIG.25and the driving process ofFIG.26that are different from the process of driving the second loading conveyor ofFIG.6and the process of driving the transport apparatus for temporary centering ofFIG.7will be described. When the process of driving the second loading conveyor inFIG.25is executed, the CPU of the electronic control unit8executes the processes similar to those from Steps S200to S214of the processing routine for driving the second loading conveyor inFIG.6. 
Here, the electronic control unit8that executes the processes from Steps S208to S214to calculate the temporary diameter of the log PW using the integrated value of the pulses from when the sensor S3is turned on to when it is turned off is an example of an implementation configuration corresponding to the “measuring unit” in the present disclosure. Then, a process is executed in which the temporary diameter of the log PW is calculated using the integrated value of the pulses from when the sensor S3is turned on to when it is turned off, the calculated temporary diameter of the log PW is stored in a predetermined area of the ROM, and the distance L1from the center point of the temporary diameter of the log PW to the virtual vertical line VVL is calculated (Step S1216). Subsequently, processes similar to those from Steps S218to S222of the processing routine for driving the second loading conveyor inFIG.6are executed, and the present routine ends. Here, the electronic control unit8that calculates the temporary diameter of the log PW using the integrated value of the pulses from when the sensor S3is turned on to when it is turned off and that stores the calculated temporary diameter of the log PW in a predetermined area of the ROM is an example of an implementation configuration corresponding to the “storage unit” in the present disclosure. When the process of driving the transport apparatus for temporary centering inFIG.26is executed, the CPU of the electronic control unit8executes processes similar to those in Steps S300to S304in the processing routine for driving the transport apparatus for temporary centering inFIG.7. Then, when it is determined in Step S304that the centering spindle holding flag Fcs is 1, a process of calculating a placing section raising acceptable distance d* is executed (Step S1305). Here, the placing section raising acceptable distance d* is defined as a movement distance of the shaft bearing housings22a,22bfrom the second receiving position RP2at which, even if the placing sections42,42are raised, the log PW transported by the placing sections42,42is not brought into contact with the log PW transported by the shaft bearing housings22a,22b. The placing section raising acceptable distance d* can be obtained from: the temporary diameter di-1of the log PW transported to the second receiving position RP2out of the temporary diameters of the log PW stored in the storage area of the ROM of the electronic control unit8; a temporary diameter diof the log PW transported from the second loading conveyor4bto the placing sections42,42; and a safety value Sv, by d*=(di-1+di)/2+Sv. The safety value Sv is a value that is set as the maximum value that can be assumed among the differences between the maximum diameter and the minimum diameter that can occur in one log PW. Here, the electronic control unit8that executes Step S1305for calculating the placing section raising acceptable distance d* is an example of an implementation configuration corresponding to the “calculation unit” in the present disclosure. Here, the placing section raising acceptable distance d* corresponds to the “safe distance” in the present disclosure, the temporary diameter di-1of a log corresponds to the “first temporary diameter” in the present disclosure, and the temporary diameter diof a log corresponds to the “second temporary diameter” in the present disclosure, these all being examples of implementation configurations.
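The relation of Step S1305is simple enough to state directly; the function below only transcribes d* = (di-1 + di)/2 + Sv with hypothetical parameter names.

```python
def raising_acceptable_distance(d_prev, d_curr, sv):
    """d* = (d_{i-1} + d_i) / 2 + Sv, where d_{i-1} is the temporary diameter of the log
    already carried away from the second receiving position RP2, d_i the temporary
    diameter of the log now on the placing sections 42, 42, and Sv the safety value."""
    return (d_prev + d_curr) / 2.0 + sv
```

With purely illustrative values of di-1 = 300 mm, di = 280 mm and Sv = 40 mm, d* = 330 mm, so in the modification ofFIG.26the placing sections42,42would be raised only after the shaft bearing housings22a,22bhave moved more than 330 mm from the second receiving position RP2.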
Subsequently, a process of determining whether or not the movement distance BL of the shaft bearing housings22a,22bfrom the second receiving position RP2is larger than the placing section raising acceptable distance d* is executed (Step S1306). Here, the movement distance BL can be measured, for example, by reading a value of a stroke sensor (not shown) disposed in the fluid cylinders CL2a, CL2b. When the movement distance BL is larger than the placing section raising acceptable distance d*, processes similar to Steps S308to S314of the processing routine for driving the transport apparatus for temporary centering inFIG.7and Steps S316to S332branched from the processing routine for driving the transport apparatus for temporary centering inFIG.8are executed, and this routine ends. In this configuration also, it is possible to achieve the same effect as that of the log processing apparatus1according to the present embodiment: for example, the time for transporting the log PW to the cutting spindles72a,72bcan be reduced. Here, the electronic control unit8that executes Step S1306for determining whether or not the movement distance BL of the shaft bearing housings22a,22bfrom the second receiving position RP2is larger than the placing section raising acceptable distance d* and Step S308for raising the placing sections42,42is an example of an implementation configuration corresponding to the "control unit" in the present disclosure. In the modification illustrated inFIG.26described above, when the movement distance BL of the shaft bearing housings22a,22bfrom the second receiving position RP2becomes larger than the placing section raising acceptable distance d*, the placing sections42,42are moved toward the second receiving position RP2to supply a log PW at the second receiving position RP2, but the present disclosure is not limited thereto. For example, a configuration may be adopted in which, before the movement distance BL of the shaft bearing housings22a,22bbecomes larger than the placing section raising acceptable distance d*, the placing sections42,42are moved toward the second receiving position RP2within the range that does not cause contact with the log PW held between the first and second centering spindles24a,24b. In this configuration, the process of driving the transport apparatus for temporary centering inFIG.27may be performed instead of that inFIG.26. The process of driving the transport apparatus for temporary centering inFIG.27is the same as the process of driving the transport apparatus for temporary centering inFIG.26except that Steps S1305to S312are replaced with Steps S2305to S2308. Therefore, hereinafter, only the portions of the process of driving the transport apparatus for temporary centering inFIG.27that are different from the process of driving the transport apparatus for temporary centering inFIG.26will be described. When the process of driving the transport apparatus for temporary centering inFIG.27is performed, the CPU of the electronic control unit8executes the processes similar to Steps S300to S304in the processing routine for driving the transport apparatus for temporary centering inFIG.26. When it is determined in Step S304that the centering spindle holding flag Fcs is 1, the process of calculating a safety distance ds is executed (Step S2305), and then the process of calculating the placing section raising acceptable distance dc* is executed (Step S2306).
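A minimal Python sketch of the decision made in Steps S1305and S1306, assuming the two stored temporary diameters and the safety value Sv are available as plain numbers:

def placing_sections_may_be_raised(d_prev, d_new, Sv, BL):
    # Step S1305: placing section raising acceptable distance d* = (d(i-1) + d(i)) / 2 + Sv.
    d_star = (d_prev + d_new) / 2 + Sv
    # Step S1306: the placing sections 42, 42 may be raised once the movement
    # distance BL of the shaft bearing housings 22a, 22b from the second
    # receiving position RP2 is larger than d*.
    return BL > d_star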
Here, the safety distance ds is defined as a distance at which the log PW transported by the placing sections42,42is not brought into contact with the log PW transported by the shaft bearing housings22a,22b, and is calculated using the accumulated value of pulses from when the sensor S3is turned on to when the sensor S3is turned off. The safety distance ds can be obtained from: the temporary diameter di-1of the log PW transported to the second receiving position RP2out of the temporary diameters of the log PW stored in the storage area of the ROM of the electronic control unit8; a temporary diameter diof the log PW transported from the second loading conveyor4bto the placing sections42,42; and a safety value Sv, by ds=(di-1+di)/2+Sv. Here, the safety value Sv is a value that is set as the maximum value that can be assumed among the differences between the maximum diameter and the minimum diameter that can occur in one log PW. In addition, the placing section raising acceptable distance dc* is set as a distance by which the placing sections42,42are ascendable while keeping the above-described safety distance ds. The placing section raising acceptable distance dc* is calculated as a movement distance of the placing sections42,42from the first receiving position RP1, and if the placing sections42,42holding the log PW are raised by the placing section raising acceptable distance dc*, the log PW on the placing sections42,42is not brought into contact with the log PW transported by the shaft bearing housings22a,22b. In the present embodiment, the placing section raising acceptable distance dc* is calculated, as shown inFIG.28, using a height Hbs from the reference line Bp on the placing sections42,42at the first receiving position RP1to the rotation axis center line of the first and second centering spindles24a,24bat the second receiving position RP2, an opening angle2θ of the placement surfaces42a,42aof the placing sections42,42, a movement distance BL of the shaft bearing housings22a,22bfrom the second receiving position RP2, and a safety distance ds, by the equations (4) and (5), the height Hbs and the opening angle2θ being measured in advance and stored in the ROM of the electronic control unit8. The height Hbc is a height between the reference line Bp and the temporary rotation axis center line of the log PW placed on the placing sections42,42. The height Hc is a height between the temporary rotation axis center line of the log PW placed on the placing sections42,42and the rotation axis center line of the centering spindles24a,24bat the second receiving position RP2when the distance between the axial center line of the first and second centering spindles24a,24bholding the log and the temporary rotation axis center line of the log PW transported by the placing sections42,42becomes equal to the safe distance ds. Here, the height Hbc corresponds to the "first distance" in the present disclosure, the movement distance BL corresponds to the "second distance" in the present disclosure, and the height Hc corresponds to the "third distance" in the present disclosure. The height Hbs corresponds to the "fourth distance" in the present disclosure, the opening angle2θ corresponds to the "geometric shape of the placing section" in the present disclosure, and the placing section raising acceptable distance dc* is an example of the implementation configuration corresponding to the "fifth distance" in the present disclosure.
The embodiment for obtaining the placing section raising acceptable distance dc* using the equations (4) and (5) is an example of the implementation configuration corresponding to "the controller calculates a first distance from the reference portion to a temporary rotation axis center line of the new log placed on the placing section, using the second temporary diameter and a geometric shape of the placing section when the first and second centering spindles are holding the log, the controller further uses the safe distance and the second distance by which the centering unit has moved from the second receiving position to the second delivery position, so as to calculate a third distance from a temporary rotation axis center line of the new log to the rotation axis center line of the first and second centering spindles with the centering unit being located at the second receiving position, when an axial distance between the rotation axis center line of the first and second centering spindles while the centering unit is moving from the second receiving position toward the second delivery position and the temporary rotation axis center line of the new log is the safe distance, and the controller further calculates a fifth distance by subtracting the first and third distances from a fourth distance, so as to cause the log feeding unit to move by the fifth distance, the fourth distance being a distance from the reference portion when the log feeding unit is at the first receiving position to the rotation axis center line of the first and second centering spindles when the centering unit is at the second receiving position." [Mathematical 2] dc*=Hbs−Hc−Hbc (4) Hc=√(ds²−BL²) (5) Thus, when the placing section raising acceptable distance dc* is obtained, the placing sections42,42are raised by the placing section raising acceptable distance dc*, and the first receiving position setting flag Frp1and the loading completion flag Fvv1are reset to 1 (Step S2308). Then, processes similar to those in Step S314of the processing routine for driving the transport apparatus for temporary centering inFIG.7and Steps S316to S328of the portion branched from the processing routine for driving the transport apparatus for temporary centering inFIG.8are executed, and the present routine ends. According to this configuration, the placing sections42,42are moved toward the first delivery position DP1to feed a new log PW to the second receiving position RP2before the log PW held between the centering spindles24a,24breaches the second delivery position DP2, that is, a position away from the second receiving position RP2by a distance larger than the assumed maximum diameter of the diameters of logs PW to be fed to the log processing apparatus1, or a position away from the second receiving position RP2by the placing section raising acceptable distance d*. As a result, the time for transporting the log PW to the cutting spindles72a,72bcan be further reduced. Here, the electronic control unit8that executes Steps S2305to S2308is an example of an implementation configuration corresponding to the "control unit" in the present disclosure.
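The geometry of equations (4) and (5) can be sketched in Python as follows. The expression used here for the first distance Hbc (log radius divided by cos θ) is an assumption carried over from the form of equation (8); the text states only that Hbc is obtained from the second temporary diameter and the opening angle2θ of the placement surfaces.

import math

def placing_section_raising_acceptable_distance(d_prev, d_new, Sv, Hbs, theta, BL):
    # Safety distance ds between the two temporary rotation axis center lines.
    ds = (d_prev + d_new) / 2 + Sv
    # First distance Hbc: height from the reference line Bp to the temporary
    # rotation axis center line of the new log resting on the V-shaped placement
    # surfaces 42a, 42a; theta is half the opening angle 2*theta, and this form
    # mirrors equation (8) (an assumption, not stated for equation (4)).
    Hbc = (d_new / 2) / math.cos(theta)
    # Equation (5): third distance Hc = sqrt(ds^2 - BL^2); assumes ds >= BL,
    # i.e. the housings have not yet moved beyond the safety distance.
    Hc = math.sqrt(ds ** 2 - BL ** 2)
    # Equation (4): fifth distance dc* = Hbs - Hc - Hbc.
    return Hbs - Hc - Hbc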
In the present embodiment, when the second loading conveyor transport flag Fs is 1, that is, when the log PW has already been loaded onto the second loading conveyor4b(Step S200), and the first receiving position setting flag Frp1is 1, that is, when the placing sections42,42of the transport apparatus for temporary centering40are at the first receiving position RP1(Step S202), and the loading completion flag Fvv1is 0, that is, when the second loading conveyor4bhas not completed loading of the log PW to be transported by the placing sections42,42of the transport apparatus for temporary centering40(Step S204), the second loading conveyor4bis driven (Step S206) so that the log PW is fed to the placing sections42,42, but the present disclosure is not limited thereto. For example, a configuration may be adopted in which, in addition to the above conditions (Steps S200to S204), a log PW is supplied to the placing sections42,42when the second delivery position set flag Fdp2is 1, that is, when the centering spindles24a,24b(shaft bearing housings22a,22b) holding the log PW therebetween at the cut ends (both end surfaces in the longitudinal direction of the log PW) are at a position away from the second receiving position RP2by the assumed maximum diameter of the diameters of the logs PW to be fed to the log processing apparatus1. In this case, the process of driving the second loading conveyor ofFIGS.29and30can be performed instead of the process of driving the second loading conveyor ofFIG.6. In the process of driving the second loading conveyor ofFIGS.29and30, the same processes as those ofFIG.6are executed except for the addition of Step S205for determining whether or not the second delivery position set flag Fdp2is 1, Step S205being performed between Step S204and Step S206of the process of driving the second loading conveyor ofFIG.6. According to this configuration, inconvenience can be satisfactorily prevented: for example, when the first receiving position RP1and the first delivery position (second receiving position) are arranged close to each other, if a new log PW were fed to the first receiving position RP1while the first and second centering spindles24a,24bare holding another log PW, the new log PW fed to the first receiving position RP1would be brought into contact with the log PW held between the centering spindles24a,24b. In the present embodiment, the sensors S1, S1are arranged so that the optical axis of the light emitted from the sensors S1, S1is at a position that intersects the virtual vertical lines VVL, VVL passing through the reference lines Bp, Bp set on the respective placing sections42,42, but the present invention is not limited thereto. For example, a configuration may be adopted in which only one sensor S1is disposed at substantially the center portion of the coupling beam13in the longitudinal direction. In the present embodiment, the placing sections42,42are driven so that the temporary rotation axis center line of the log PW is obtained using the log PW detection by the sensors S1, S1, and the temporary rotation axis center line of the log PW is aligned with the rotation axis center line of the centering spindles24a,24b, but the present invention is not limited thereto.
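The gating described above (Steps S200to S204with the additional Step S205) amounts to a simple conjunction of flag checks; a Python sketch with the flag semantics taken from the text is given below.

def should_feed_log_to_placing_sections(Fs, Frp1, Fvv1, Fdp2):
    # Step S200: a log PW has already been loaded onto the second loading conveyor 4b.
    # Step S202: the placing sections 42, 42 are at the first receiving position RP1.
    # Step S204: loading of the log PW to be transported by the placing sections
    #            has not yet been completed.
    # Step S205 (added in FIGS. 29 and 30): the centering spindles holding the
    #            previous log are at least the assumed maximum log diameter away
    #            from the second receiving position RP2.
    return Fs == 1 and Frp1 == 1 and Fvv1 == 0 and Fdp2 == 1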
For example, a configuration may be adopted in which the temporary rotation axis center line of the log PW is aligned with the rotation axis center lines of the centering spindles24a,24bwithout using the sensors S1, S1by the transport apparatus for temporary centering140of the modification illustrated inFIGS.31and32. The transport apparatus for temporary centering140of the modification has the same configuration as that of the transport apparatus for temporary centering40according to the present invention, except that it includes upper placing sections132,132as shown inFIGS.31and32. The upper placing sections132,132are disposed to face the placing sections42,42in the vertical direction. The upper placing sections132,132basically have the same shape as those of the placing sections42,42. That is, the upper placing sections132,132have generally V-shaped placement surfaces42a,42aand the reference lines Bp2on the placement surfaces42a,42arespectively. The transport apparatus for temporary centering140is an example of an implementation configuration corresponding to the "log feeding unit" of the present invention. The upper placing sections132,132are each threadedly engaged with a male thread rod (not shown) arranged to extend in the vertical direction, and a motor (not shown) is connected to one end of each male thread rod. By rotating the male thread rods (not shown) forward and backward, the upper placing sections132,132are moved closer to and away from the placing sections42,42. The electronic control unit8controls the motors M4, M4(not shown) so that a distance B1from the reference line Bp2of the upper placing sections132,132to the center axis of the centering spindles24a,24bthat are assumed to be at the second receiving position RP2is always the same as the distance B2from the reference line Bp of the placing sections42,42to the rotation axis center line of the spindles24a,24bthat are assumed to be at the second receiving position RP2. The transport apparatus for temporary centering140in the modification drives the placing sections42,42and the upper placing sections132,132to approach each other when the log PW transported by the transport apparatus for temporary centering140is not brought into contact with the log PW transported by the shaft bearing housings22a,22bof the log rotation apparatus20even if the log PW is set at the first receiving position RP1and also the log PW is moved to the first delivery position DP1by the transport apparatus for temporary centering140, so that, as shown inFIG.32, the log PW is held between the placing sections42,42and the upper placing sections132,132. Here, whether or not the log PW is held between the placing sections42,42and the upper placing sections132,132can be determined from load fluctuations of the motors M4, M4and the motors (not shown). In this way, when the log PW is held between the placing sections42,42and the upper placing sections132,132, the temporary rotation axis center line of the log PW (the center of the log PW in the vertical direction) is aligned with the rotation axis center line of the centering spindles24a,24b.
In the present embodiment, the log PW is detected by the sensors S1, S1arranged between the first receiving position RP1and the first delivery position DP1toward downstream in the horizontal direction in the transport direction of the log PW, and the temporary rotation axis center line of the log PW is obtained using the detection of the log PW by the sensors S1, S1, so that the temporary rotation axis center line of the log PW is aligned with the rotation axis center line of the centering spindles24a,24b, but the present invention is not limited thereto. For example, a configuration may be adopted in which the temporary rotation axis center line of the log PW is obtained using the log PW detection by measurement sensors230instead of the sensors S1, S1, so that the temporary rotation axis center line is aligned with the rotation axis center line of the centering spindles24a,24b, as shown inFIG.33. In the present modification, as shown inFIG.33, fixed V-shaped tables262are disposed on the downstream end portion of the second loading conveyor4b, and the disposed positions are defined as the first receiving position RP1. The fixed V-shaped tables262include V-shaped placement surfaces262a,262athat open upward in the vertical direction. In the present modification, the intersection line between the two planes that constitute the V shape of the placement surfaces262a,262ais used as a reference line Bpf for obtaining the temporary rotation axis center line of the log PW, and the virtual vertical line VVL is defined as a straight line that passes through the rotation axis center line of the centering spindles24a,24b, the reference line Bpf, and the reference line Bp of the placing section42. As shown inFIG.33, the measurement sensor230is arranged on the upper frames18so that the laser emitted from the measurement sensor230passes through the virtual vertical line VVL. In the present modification, as shown inFIG.33, a height Hfs in the vertical direction from the reference line Bpf set on the fixed V-shaped tables262and262to the axial center line of the centering spindles24a,24b, a height Hbs in the vertical direction from the reference line Bpf to the measurement sensor230, an opening angle2θ of the placement surfaces262a,262aof the V-shaped tables262and262, and a distance Hbb between the reference line Bp on the placing sections42,42at the initial position and the reference line Bpf are measured in advance and stored in the ROM of the electronic control unit8. When the measurement sensor230detects the log PW, using the equations (6) to (8), a distance L2by which the placing sections42,42need to be raised is obtained so that the temporary rotation axis center line of the log PW is aligned with the rotation axis center line of the centering spindles24a,24b. [Mathematics 3] L2=Hfs+Hbb−Hfc (6) r=(Hbs−Hp)·cos θ/(1+cos θ) (7) Hfc=r/cos θ (8) In the equations, r is a virtual radius of the log PW (or a virtual circle corresponding to the log PW) that is in contact with the two placement surfaces262aand262aand passes through a point at the top of the log PW detected by the measurement sensor230(intersection of the optical axis emitted from the measurement sensor230and the log PW); Hfc is the height from the reference line Bpf to the temporary rotation axis center line of the log PW; and Hp is the distance to the log PW measured by the measurement sensor230(distance in the direction along the virtual vertical line VVL).
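Equations (6) to (8) can be combined into the following Python sketch; theta denotes half of the opening angle2θ of the placement surfaces262a,262a, and Hfs, Hbb and Hbs are the pre-measured values stored in the ROM of the electronic control unit8.

import math

def placing_section_raise_distance(Hfs, Hbb, Hbs, Hp, theta):
    # Equation (7): virtual radius r of the log (virtual circle) in contact with
    # both placement surfaces and passing through the detected top point.
    r = (Hbs - Hp) * math.cos(theta) / (1 + math.cos(theta))
    # Equation (8): height Hfc from the reference line Bpf to the temporary
    # rotation axis center line of the log.
    Hfc = r / math.cos(theta)
    # Equation (6): distance L2 by which the placing sections 42, 42 must be raised
    # so that the temporary rotation axis center line of the log is aligned with
    # the rotation axis center line of the centering spindles 24a, 24b.
    return Hfs + Hbb - Hfc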
In the present modified example also, it is possible that, before passing the log PW to the centering spindles24a,24b, the temporary rotation axis center line of the log PW is obtained and the temporary rotation axis center line is aligned with the rotation axis center line of the centering spindles24a,24b. Thus, when the log PW is held between the centering spindles24a,24bat the cut ends (both end surfaces in the longitudinal direction) of the log PW, the deviation (deviation in axial center line) between the cutting axis center line of the log PW and the rotation axis center line of the centering spindles24a,24bcan be reduced, and the fluctuation of the log PW can be reduced during rotation of the log PW by the centering spindles24a,24b. In the present embodiment, the pendular transport apparatus50is used to transport the log PW from the centering spindles24a,24bto the cutting spindles72a,72b, but the present invention is not limited thereto. For example, as shown in the lathe charger302of the modification inFIG.34, a running transport apparatus350may be used instead of the pendular transport apparatus50. Alternatively, as shown in the lathe charger402of the modification inFIG.35, a fixed transport apparatus450may be used instead of the pendular transport apparatus50. The lathe chargers302,402are examples of an implementation configuration corresponding to the "log feeding apparatus" in the present invention. As shown inFIG.34, the running transport apparatus350of the modification has the same configuration as that of the pendular transport apparatus50of the above embodiment, except that the moving beam352is supported, instead of the rotary frame52, by the rails R5, R5by means of the guided sliding members352a,352a, and that male thread rods355,355threadedly engaged with the moving beam352and motors M5, M5for driving the male thread rods355,355to rotate are included. Hence, among the components of the running transport apparatus350, the same components as those of the pendular transport apparatus50are denoted by the same reference numerals, and detailed description thereof is omitted to avoid duplication. The running transport apparatus350is an example of an implementation configuration corresponding to the "transport unit" in the present invention. The running transport apparatus350includes an elongated moving beam352supported at both ends in the longitudinal direction by rails R5, R5installed on the upper surfaces of the upper frames18,18via sliding members352a,352awith guides, male thread rods355,355threadedly engaged with the moving beam352, holders54,54attached to the moving beam352slidably in the longitudinal direction of the moving beam352, and clamping arms56,56movably supported by the holders54,54. The attachment of the holders54,54to the moving beam352is performed in a similar manner to that of the pendular transport apparatus50of the present embodiment described above. That is, the holders54,54are attached to rails (not shown) located along the longitudinal direction of the moving beam352using the guided sliding portions54a,54a(seeFIG.3), the rails being disposed on the lower surfaces of both ends excluding the central portion in the longitudinal direction of the moving beam352. The holders54,54are also connected to cylinder rods (not shown) of the fluid cylinders CL3a, CL3b(seeFIG.3) that are supported by a support wall (not shown) protruding downward toward the generally central lower surface of the moving beam352in the longitudinal direction.
The holders54,54are further supported slidably in the longitudinal direction of the moving beam352in response to extension and retraction of the cylinder rods (not shown). As shown inFIG.34, the male thread rods355,355are rotatably supported by the mount bases357,357,357,357through bearings (not shown). The mount bases357,357,357,357are fixed onto the upper surface of the upper frames18,18at the both ends in the horizontal direction in the transport direction of the log PW. One end of the male thread rods355,355in the axial center line direction is connected to a rotation shaft (not shown) of the motors M5, M5. The running transport apparatus350thus configured can reciprocate the moving beam352in the axial center line direction of the male thread rods355,355by rotating the male thread rods355,355forward and backward by the motors M5, M5. The motors M5, M5have a rotary encoder (not shown) and can move the moving beam352to a desired position. In the present modification, in the processing routine in Step S414for driving the log rotation apparatus inFIG.9, the log PW is rotated until the virtual line VL1(FIG.20) connecting the two intersections P1and P2faces the vertical direction, when the two intersections P1and P2(FIG.20) between the cutting axis center line of the log PW and the cut ends (both end surfaces in the longitudinal direction) of the log PW are viewed in the direction along the rotation axis center lines of the centering spindles24a,24b. In the processing routine in Step S504for driving the pendular transport apparatus inFIG.11, the motors M5, M5are driven to move the moving beam352in the axial center line direction of the male thread rods355,355so that the clamping arms56,56are in the posture as set, and also the motors M4, M4(seeFIG.3) are driven to slide the clamping arms56,56. As shown inFIG.35, the modified fixed transport apparatus450has the same configuration as the pendular transport apparatus50of the present embodiment described above except that the rotary frame52is changed to a fixed beam452. Accordingly, the same components as those of the pendular transport apparatus50among the components of the fixed transport apparatus450are denoted by the same reference numerals, and the detailed description thereof is omitted to avoid duplication. The fixed transport apparatus450is an example of an implementation configuration corresponding to the "transport unit" in the present invention. The fixed transport apparatus450includes an elongated fixed beam452whose both ends in the longitudinal direction are fixed to the upper frames18,18, holders54,54attached to the fixed beam452so as to be slidable in the longitudinal direction of the fixed beam452, and clamping arms56,56slidably supported by the holders54,54. As shown inFIG.35, the fixed beam452includes a lower surface having an upward slope in the direction along the horizontal direction in the conveyance direction of the log PW. As a result, the holders54,54and the clamping arms56,56are supported by the fixed beam452in an inclined state with respect to the vertical direction. The clamping arms56,56have sliding axis center lines56c,56c(straight lines parallel to the sliding direction of the clamping arms56,56and passing through the center in the width direction of the clamping arms56,56(left-right direction inFIG.35)), and the sliding axis center lines56c,56ccoincide with an arbitrary straight line passing through the rotation axis center line of the cutting spindles72a,72b.
Note that the holders54,54are attached to the fixed beam452in the same manner as that of the pendular transport apparatus50of the present embodiment described above. That is, the holders54,54are attached to rails (not shown) located along the longitudinal direction of the fixed beam452using the guided sliding portions54a,54a(seeFIG.3), the rails being disposed on the lower surfaces of both ends excluding the central portion in the longitudinal direction of the fixed beam452. The holders54,54are also connected to cylinder rods (not shown) of the fluid cylinders CL3a, CL3b(seeFIG.3) that are supported by a support wall (not shown) protruding downward toward the generally central lower surface of the fixed beam452in the longitudinal direction. The holders54,54are further supported slidably in the longitudinal direction of the fixed beam452in response to extension and retraction of the cylinder rods (not shown). In the lathe charger402of the present modification, instead of the configuration where the shaft bearing housings22a,22bare reciprocated between the second receiving position RP2and the second delivery position DP2by the fluid cylinders CL2a, CL2b(seeFIG.1) connected to the shaft bearing housings22a,22b, a configuration is adopted where the shaft bearing housings22a,22bare screw-engaged with the male thread rods455,455that are rotatably supported by the upper frames18,18to extend in the direction along the horizontal direction among the transport directions of the log PW, so that, by rotating the male thread rods455,455forward and backward by the motors M6, M6connected to one end of the male thread rods455,455in the axial center line direction, the shaft bearing housings22a,22bare moved between the second receiving position RP2and the second delivery position DP2. The motors M6, M6have a rotary encoder (not shown). As a result, the positions of the shaft bearing housings22a,22bcan be known, and the shaft bearing housings22a,22bcan be moved to desired positions. Therefore, the sensor S4is not necessary in this modification. In the present modification, the second delivery position DP2is defined as a position where the shaft bearing housings22a,22bare moved until the virtual line VL1connecting the two intersections P1and P2of the log PW positioned in the rotational direction at an angle equal to the inclination angle of the clamping arms56,56(inclination angle with respect to the vertical line) coincides with the sliding axis center lines56c,56cof the clamping arms56,56. Therefore, the determination whether or not the second delivery position setting flag Fdp2is 1 in Step S306in the processing routine for driving the transport apparatus for temporary centering that is executed by the electronic control unit8of the log processing apparatus1of the above described embodiment is performed by determining whether or not the moving distance of the shaft bearing housings22a,22bis greater than or equal to the assumed maximum diameter of the diameters of the logs PW to be fed to the log processing apparatus1, the moving distance being obtained using the integrated value of pulses output from the rotary encoders of the motors M6, M6. In the present modification, in Step S414in the processing routine for driving the log rotation apparatus inFIG.9, the log PW is rotated by a rotation angle so that the virtual line VL1lies parallel to the sliding axis center lines56c,56cof the clamping arms56,56.
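In this modification the movement distance is derived from the encoder pulses of the motors M6, M6rather than from the sensor S4; a rough Python sketch is given below, where distance_per_pulse (feed of the male thread rods455,455per encoder pulse) is an assumed calibration constant not given in the text.

def second_delivery_position_flag(pulse_count, distance_per_pulse, assumed_max_log_diameter):
    # Moving distance of the shaft bearing housings 22a, 22b, obtained from the
    # integrated value of pulses output by the rotary encoders of the motors M6, M6.
    moving_distance = pulse_count * distance_per_pulse
    # Step S306 criterion: the second delivery position setting flag Fdp2 is treated
    # as 1 once the housings have moved at least the assumed maximum diameter of
    # the logs PW fed to the log processing apparatus 1.
    return 1 if moving_distance >= assumed_max_log_diameter else 0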
Further, in the processing routine of Step S504for driving the pendular transport apparatus inFIG.11, the motors M4, M4(seeFIG.3) are driven so that the clamping arms56,56have the posture set in Step S502. In the present embodiment, the shaft bearing housings22a,22bare reciprocated between the second receiving position RP2and the second delivery position DP2by the fluid cylinders CL2a, CL2b(seeFIG.1), but the present invention is not limited thereto. For example, as in the log rotation apparatuses520,620of the modifications illustrated inFIGS.36and37, the screw mechanisms530,630may be used to move the shaft bearing housings22a,22bbetween the second receiving position RP2and the second delivery position DP2. The log rotation apparatus520of the modified example has the same configuration as the log rotation apparatus20of the present embodiment except that the fluid cylinders CL2a, CL2b(seeFIG.1) are replaced with the screw mechanism530as shown inFIG.36. Therefore, the same components as those of the log rotation apparatus20in the log rotation apparatus520are denoted by the same reference numerals, and detailed description thereof is omitted to avoid duplication. The screw mechanism530includes male thread rods555,555that are rotatably supported by the lower frame12so as to extend in the direction along the horizontal direction among the transport directions of the log PW, and motors M7, M7connected to one end of the male thread rods555,555. The male thread rods555,555are threadedly engaged with the shaft bearing housings22a,22b. Thus, the shaft bearing housings22a,22bcan be reciprocated between the second receiving position RP2and the second delivery position DP2by rotating the male thread rods555,555forward and backward by the motors M7, M7. The motors M7, M7have a rotary encoder (not shown), and are able to recognize the positions of the shaft bearing housings22a,22band to move the shaft bearing housings22a,22bto desired positions. Further, as shown inFIG.37, in the log rotation apparatus620of the modification, the shaft bearing housings22a,22bare disposed on the rails R1via the mount bases634a,634b, and are engaged with the male thread rods555,555. Except for this point, it has the same configuration as the log rotation apparatus520of the modified example described above. Therefore, among the components of the log rotation apparatus620, the same components as those of the log rotation apparatus520are denoted by the same reference numerals, and detailed description thereof is omitted to avoid duplication. As shown inFIG.37, the mount bases634a,634bhave a generally L shape when viewed from the side (when viewed in a direction orthogonal to both the axial center line direction of the male thread rods555,555and the vertical direction). The mount bases634a,634bare disposed on the rails R1via the guided sliding portions633a,633b. The mount bases634a,634bhave rails R6, R6extending in the vertical direction. The shaft bearing housings22a,22bare engaged with the rails R6, R6via the guided sliding portions623a,623b. The shaft bearing housings22a,22bare threadedly engaged with male thread rods655,655that are integrally connected to rotation shafts (not shown) of the motors M9, M9fixed to the mount bases634a,634b.
That is, the shaft bearing housings22a,22bare installed on the mount bases634aand634bin such a manner that when the motors M9, M9rotate the male thread rods655,655forward and backward, the shaft bearing housings22a,22bcan reciprocate in the vertical direction with respect to the mount bases634a,634b. The shaft bearing housings22a,22bare threadedly engaged with male thread rods555,555that are integrally connected to a rotation shaft (not shown) of the motors M8, M8installed on the front frame14, and are reciprocated in the axial center line direction of the male thread rods555,555on the mount bases634a,634bwhen the motors M8, M8rotate the male thread rods555,555forward and backward. The motors M8, M8, M9, and M9have a rotary encoder (not shown) so as to be able to recognize positions of the shaft bearing housings22a,22b, and to move the shaft bearing housings22a,22bin the horizontal direction (direction of the axial center line of the male thread rods555,555) and the vertical direction (direction of the axial center line of the male thread rods655,655) to desired positions. According to the log rotation apparatus620of the modification, the shaft bearing housings22a,22bare not only reciprocated between the second receiving position RP2and the second delivery position DP2but are also capable of reciprocating in the vertical direction. Therefore, when the log PW is delivered from the centering spindles24a,24bto the clamping arms56,56of the pendular transport apparatus50, it is not necessary to slide the clamping arms56,56. For this reason, the clamping arms56,56can be fixed with respect to the holders54,54so as not to be slidable. In the present embodiment, the outer peripheral surface of the log PW is measured by measuring the distance to the outer peripheral surface with a plurality of laser measuring instruments17aarranged at equal intervals in the longitudinal direction of the coupling beam17, but the present disclosure is not limited thereto. For example, as shown inFIG.38, a configuration may be adopted in which the plurality of laser measuring instruments17aare replaced with one line laser717and a camera718that captures an image of the log PW irradiated by the line laser717, so that the outer peripheral shape of the log PW is measured based on the image taken by the camera718. In the present embodiment, the transport apparatus for temporary centering40is located below the log rotation apparatus20in the vertical direction, but the present disclosure is not limited thereto. For example, the transport apparatus for temporary centering40may be located above the log rotation apparatus20in the vertical direction, or located upstream of the log rotation apparatus20in the horizontal direction along the transport direction of the log PW. In the present embodiment, the log PW is rotated one time by the centering spindles24a,24bto calculate the cutting axis center line of the log PW, and the log PW is set to the rotation angle α corresponding to the calculated cutting axis center line (Steps S406to S410) and then the shaft bearing housings22a,22bare moved toward the second delivery position DP2(Step S412). However, the shaft bearing housings22a,22bmay be moved toward the second delivery position DP2while the log PW is rotated one time by the centering spindles24a,24bto calculate the cutting axis center line of the log PW and the log PW is set to the rotation angle α corresponding to the calculated cutting axis center line.
In this configuration, the number of logs to be fed per unit time to the first and second cutting spindles can be increased further. In the present embodiment and the above-described modifications, the log processing apparatus1includes the veneer lathe6for peeling a veneer sheet from the log PW, but the present disclosure is not limited thereto. For example, instead of the veneer lathe6, the log processing apparatus1may be configured to include a processing implement for eliminating the uneven portion on the outer periphery of the log PW. In this case, the processing implement includes a cutter that is driven to rotate, instead of the knife74. This embodiment shows an exemplary form for carrying out the present disclosure. Therefore, the present disclosure is not limited to the configuration of the present embodiment. The correspondence between each component of the present embodiment and each component of the present disclosure is shown below.
REFERENCE SIGNS LIST
1 Log processing apparatus (log processing apparatus)
2 Lathe charger (log feeding apparatus)
4a First loading conveyor (loading apparatus)
4b Second loading conveyor (loading apparatus)
6 Veneer lathe (processor)
8 Electronic control device (control unit, measuring unit, storage unit, calculating unit)
10 Frame
11 Extending piece
12 Lower frame
13 Coupling beam
14 Front frame
14a Bottom wall
14b Vertical wall
14c Vertical wall
15 Rear frame
15a Bottom wall
15b Vertical wall
15c Vertical wall
16 Intermediate frame
17 Coupling beam
17a Laser length measuring device
18 Upper frame
20 Log rotation apparatus
22a Shaft bearing housing (centering unit)
22b Shaft bearing housing (centering unit)
23a Sliding portion with guide
23b Sliding portion with guide
24a Centering spindle (first centering spindle, second centering spindle)
24b Centering spindle (second centering spindle, first centering spindle)
40 Transport apparatus for temporary centering (log feeding unit)
42 Placing section (placing section)
42a Placement surface
43 Sliding portion with guide
44 Male thread rod
50 Pendulum transport apparatus (transport unit)
52 Rotation frame
52a Rotation axis
52a′ Rotation axis center line
53 Shaft bearing housing
54 Holder
54a Sliding portion with guide
56 Clamping arm (first clamping arm, second clamping arm)
56a Sliding portion with guide
56b Claw
56c Sliding axis center line
62 Sprocket
64 Claw
72a Cutting spindle (first cutting spindle, second cutting spindle)
72b Cutting spindle (second cutting spindle, first cutting spindle)
74 Knife (blade)
132 Upper placing section
140 Transport apparatus for temporary centering (log feeding unit)
230 Measurement sensor
262 Fixed V-shaped table
262a Placement surface
302 Lathe charger (log feeding apparatus)
350 Running transport apparatus (transport unit)
352 Movable beam
352a Sliding member with guide
355 Male thread rod
357 Mount base
402 Lathe charger (log feeding apparatus)
450 Fixed transport apparatus (transport unit)
452 Fixed beam
455 Male thread rod
520 Log rotation apparatus
530 Thread mechanism
555 Male thread rod
620 Log rotation apparatus
623a Sliding portion with guide
623b Sliding portion with guide
630 Thread mechanism
633a Sliding portion with guide
633b Sliding portion with guide
634a Mount base
634b Mount base
655 Male thread rod
717 Line laser
718 Camera
PW Log (log)
S1 Sensor (log detection sensor)
S2 Sensor
S3 Sensor (measurement unit)
S4 Sensor
RP1 First receiving position (first receiving position)
RP2 Second receiving position (second receiving position, receiving position)
DP1 First delivery position (first delivery position)
DP2 Second delivery position (second delivery position, delivery position)
R1 Rail
R2 Rail
R3 Rail
R4 Rail
R5 Rail
CL1a Fluid cylinder
CL1b Fluid cylinder
CL2a Fluid cylinder (driving unit)
CL2b Fluid cylinder (driving unit)
CL3a Fluid cylinder
CL3b Fluid cylinder
CL4a Fluid cylinder
CL4b Fluid cylinder
M1 Motor
M2 Motor
M3 Motor
M4 Motor
M5 Motor
M6 Motor
M7 Motor
M8 Motor
M9 Motor
Bp Reference line (Reference portion)
Bp2 Reference line
Bpf Reference line
VVL Virtual vertical line
CH Chain
P1 Intersection
P2 Intersection
Hss Height in a vertical direction from Sensor S1to rotation axis center line of centering spindle (seventh distance)
Hbp Height by which a reference line Bp has moved (movement amount)
Hs1 Height in a vertical direction from reference line Bp to Sensor S1 (sixth distance)
Hbc Height from reference line Bp to a temporary rotation axis center line of log PW
L1 Distance from the center point of temporary diameter of log to virtual vertical line
L2 Distance by which placing sections are required to move so that a temporary rotation axis center line of log PW is aligned with the rotation axis center line of centering spindles
2θ Opening angle of placement surface of placing section (geometric shape of placing section)
r Virtual radius of log (virtual radius of log)
VC Virtual circle
α Rotation angle
β Rotation angle
VL1 Virtual line
Fs Second loading conveyor delivery flag
Frp1 First receiving position setting flag
Fvv1 Loading completion flag
Fcs Centering spindle holding flag
Fdp2 Second delivery position setting flag
Fdp1 First delivery position setting flag
Frp2 Second receiving position setting flag
Fsa Clamping arm holding flag
Hfs Height in a vertical direction from a reference line Bpf to the rotation axis center line of the centering spindle
Hbb Distance between a reference line Bpf and a reference line Bp on a placing section at an initial position
Hbs Height in a vertical direction from a reference line Bpf to a measurement sensor (fourth distance)
Hbc Height from a reference line Bp to a temporary rotation axis center line of log PW (first distance)
Hfc Height from a reference line Bpf to a temporary rotation axis center line of log
Hp Distance to log measured by measurement sensor
d* Placing section raising acceptable distance (safe distance)
di-1 Temporary diameter of log transported to second receiving position (first temporary diameter)
di Temporary diameter of log transported from a second loading conveyor to a placing section (second temporary diameter)
Sv Safe value
BL Movement distance (second distance)
ds Safe distance (safe distance)
dc* Placing section raising acceptable distance (fifth distance)
Hc Height (third distance) from a temporary rotation axis center line of log PW to rotation axis center line of centering spindles24a,24bat second receiving position RP2
B1 Distance
B2 Distance
11858165
DESCRIPTION OF VARIOUS EMBODIMENTS
FIG.1shows a top view of a continuous milling machine1for the profiling of one or more edges2of panels3. In this case it relates to a continuous milling machine of the double-end-tenoner type for the milling of the short opposite edges of rectangular floor panels. The principles of the invention may be translated mutatis mutandis to a similar milling machine for the profiling of the opposite long edges. FIGS.2and3show further views of the same continuous milling machine1. The references used, if not defined here, are defined in the appended claims. In particular, it is shown inFIG.3that at the location of the cutting blade9, a rotating milling cutter11is active for machining the edge2. The edge2is supported for both machining operations by sliding surfaces6of the same slide shoe4and pressure shoe5. Although not shown in the example, the opposite edge2of the panel is, for example, machined similarly, namely at least with one non-rotating cutting blade9that forms a bevelled edge13, with the difference that the coupling means16on the opposite edge2is configured as a groove instead of a tooth17as in the case of the edge illustrated. It is clear that in the embodiment illustrated, the cutting blade9is provided on the slide shoe4, and that it can be provided, mutatis mutandis, on the pressure shoe5. It is not necessary that a cutting blade9that is provided on the slide shoe4, or respectively the pressure shoe5, machines the surface7, or respectively the surface8, that is led over the sliding surface6of the respective slide shoe4. It is namely possible that a cutting blade9mounted on the slide shoe4machines the opposite surface8, for example because the cutting blade9in question machines the opposite surface8via a bridge that bridges over the thickness of the panel, and vice versa for a cutting blade mounted on the pressure shoe5. Furthermore, it is also possible that the cutting blade9is used for machining a portion of the substrate15, without a portion of the surfaces7-8necessarily being removed for this. The substrate15may be provided with a coating, for example formed by a transparent thermoplastic layer14. The cutting blade9may, as stated in the introduction, be used for a roughing operation. As stated in the introduction, such a blade may also be fixed firmly to the machine bed18in some other way. The machine bed18is only shown schematically inFIG.3, but a person skilled in the art is sufficiently aware that a machine bed18relates to the reference construction of the continuous milling machine1. FIG.4shows a detail view along line II-II shown inFIG.1of a first embodiment of a cutting device according to the invention. The slide shoe4inFIG.4is made as one piece. The slide shoe4has sliding surfaces6for guiding a surface of a panel to be milled over them. A cutting blade9is fastened to this slide shoe4. The cutting blade9is fastened to the slide shoe4by means of two bolts21in a slot23of the cutting blade. By loosening these two bolts21, the vertical position of the cutting blade9can be adjusted by means of a set screw25placed obliquely, after which the cutting blade is fixed to the slide shoe with the two bolts. The slide shoe is configured so that space27is available for the mounting and operation of a rotating milling cutter. The one-piece slide shoe can, by means of its sliding surfaces, support the panel in the positions before and after the engagement of the rotating milling cutter. This milling cutter is not shown inFIG.4.
At the location of this slide shoe, a milling operation may thus be carried out by means of a rotating milling cutter, simultaneously with the forming of a portion of the profiled edge region of the panel by means of the cutting blade9. FIG.5shows a detail view along line II-II shown inFIG.1of a second embodiment of a cutting device according to the invention. The slide shoe4fromFIG.4is made in two parts. The slide shoe4has sliding surfaces6for guiding a surface of a panel to be milled over them. A cutting blade9is fastened to this slide shoe4. The cutting blade9is fastened to the slide shoe4by means of two bolts21in a slot23of the cutting blade. By loosening these two bolts21, the vertical position of the cutting blade9can be adjusted by means of a set screw25placed obliquely, after which the cutting blade is fixed to the slide shoe with the two bolts. The slide shoe is configured so that space27is available for the mounting and operation of a rotating milling cutter. This milling cutter is not shown inFIG.5. At the location of this slide shoe, a milling operation may thus be carried out by means of a rotating milling cutter, simultaneously with the forming of a portion of the profiled edge region of the panel by means of the cutting blade9. One part of the slide shoe can support the panel by means of its sliding surface for engagement of the rotating cutting tool, whereas the other part of the slide shoe can support the panel by means of its sliding surface after engagement of the rotating cutting tool. The present invention is by no means limited to the embodiments described above, and such slide shoes and/or pressure shoes, cutting devices, continuous milling machines and methods for the manufacture of panels can be realized while remaining within the scope of the present invention.
11858166
Reference numbers in the figures,1—transverse chain feeding device,2—longitudinal roller feeding device,3—gluing device,4—unit narrow board;11—fixed chain transport mechanism,111—first servo drive unit,112—first edge baffle,113—first conveyor chain assembly;12—movable chain transport mechanism,121—second servo drive unit,122—caster,123—second edge baffle,124—second conveyor chain assembly;13—horizontal linear rolling guide rail,14—rolling slide block,15—guide wheel,16—transverse frame;21—lifting power roller assembly,211—lifting power roller,212—lifting slide block,213—vertical linear guide rail,214—lifting power roller motor,215—chain wheel,216—first chain,217—lifting cylinder,218—lifting slide seat;22—front pneumatic compression roller assembly,221—front compression cylinder,222—front fisheye joint,223—front compression roller seat,224—front compression roller with bearing;2241—first front compression roller with bearing,2242—second front compression roller with bearing;225—front rolling slide block,226—front rolling guide rail;23—longitudinal feed roller assembly,231—longitudinal feed motor,232—coupling,233—worm gear reducer,234—longitudinal feed roller;24—guide roller,25—first sensor,26—second sensor,27—third sensor,28—longitudinal feed frame,281—first sensor bracket,282—second sensor bracket,283—third sensor bracket;29—rear pneumatic compression roller assembly,291—rear compression cylinder,292—rear fisheye joint,293—rear compression roller seat,294—rear compression roller with bearing;2941—first rear compression roller with bearing,2942—second rear compression roller with bearing,2943—third rear compression roller with bearing;2944—fourth rear compression roller with bearing,2945—fifth rear compression roller with bearing;2946—sixth rear compression roller with bearing;31—glue pump,32—glue barrel,33—glue application head,34—glue delivery hose,35—slide table fixed frame,36—pneumatic slide table,361—slide table cylinder,362—slide table guide rail,363—slide table seat.
DETAILED DESCRIPTION OF THE EMBODIMENTS
The specific embodiments of the present disclosure will be further described in combination withFIGS.1to16. A unit narrow board glue application device suitable for high-frequency hot pressing of solid wood edge glued panels includes a transverse chain feeding device1, a longitudinal roller feeding device2and a gluing device3. The transverse chain feeding device1is used to transversely transport the unit narrow board4to the longitudinal roller feeding device2, and also acts as a buffer bin for the unit narrow board4. The longitudinal roller feeding device2is used to turn the unit narrow boards4one by one from the transverse motion to the longitudinal feeding movement so that the side of each unit narrow board4can be glued during the feeding process. The gluing device3is used for gluing the side of the unit narrow board4. The transverse chain feeding device1includes a fixed chain transport mechanism11, a movable chain transport mechanism12, a horizontal linear rolling guide rail13, a rolling slide block14, guide wheels15and a transverse frame16, wherein a front end of the fixed chain transport mechanism11is fixedly installed on the ground by a support leg, and a rear end is supported on the longitudinal feed frame28in the longitudinal roller feeding device2.
A front end of the movable chain transport mechanism12is supported on the ground by a support leg and a caster122, a rear end is supported on the horizontal linear rolling guide rail13by the rolling slide block14, the horizontal linear rolling guide rail13is fixed on the transverse frame16, and the transverse frame16is fixed on the ground; and the movable chain transport mechanism12is capable of moving along the horizontal linear rolling guide rail13, so as to fit unit narrow boards4of different lengths. There are eight of the guide wheels15in total, which are arranged with their axes vertical, arranged along a longitudinal feed direction of the unit narrow board4, and fixed on the top of the transverse frame16to limit a continuous transverse motion of the unit narrow board4and serve as a guide reference for a longitudinal feed of the unit narrow board4. The longitudinal roller feeding device2includes a lifting power roller assembly21, front pneumatic compression roller assemblies22, a longitudinal feed roller assembly23, rear pneumatic compression roller assemblies29, guide rollers24, a first sensor25, a second sensor26, a third sensor27, a longitudinal feed frame28, a first sensor bracket281, a second sensor bracket282and a third sensor bracket283. Further, the lifting power roller assembly21includes two lifting power rollers211, two sets of lifting slide blocks212, two vertical linear guide rails213, a lifting power roller motor214, a chain wheel215, a first chain216, a lifting cylinder217and a lifting slide seat218, wherein the two lifting power rollers211, the lifting power roller motor214and a chain drive pair composed of the chain wheel215and the first chain216are arranged on the lifting slide seat218; the lifting power roller motor214drives the lifting power roller211to rotate through the chain drive pair composed of the chain wheel215and the first chain216, and the speed of the lifting power roller motor214is steplessly adjustable. The two vertical linear guide rails213are fixed at a lower part of the longitudinal feed frame28, and the lifting slide seat218is installed on the two vertical linear guide rails213through the two sets of lifting slide blocks212. A cylinder barrel of the lifting cylinder217is arranged at the lower part of the longitudinal feed frame28, a piston rod of the lifting cylinder217is connected with a lower part of the lifting slide seat218, and the lifting cylinder217drives the lifting slide seat218to rise and fall along the two vertical linear guide rails213, so as to control the two lifting power rollers211to rise or fall. The two lifting power rollers211are respectively located on both sides of a rear end of the fixed chain transport mechanism11. Further, there are two sets of the front pneumatic compression roller assemblies22, which are installed on an upper part of the longitudinal feed frame28and located above an upper part of the lifting power roller assembly21, respectively corresponding to the two lifting power rollers211. Further, the longitudinal feed roller assembly23includes a longitudinal feed motor231, a coupling232, six worm gear reducers233, and six longitudinal feed rollers234, wherein the longitudinal feed motor231is connected with a worm of the sixth worm gear reducer233along the longitudinal feed direction of the unit narrow board4, and the worms of two adjacent worm gear reducers233are connected through the coupling232.
Each of the six longitudinal feed rollers234is installed on a worm gear shaft of each of the six worm gear reducers233, and the longitudinal feed motor231and the six worm gear reducers233are arranged on the upper part of the longitudinal feed frame28. The installation heights of the six worm gear reducers233are identical, and the installation heights of the six longitudinal feed rollers234are identical. The six longitudinal feed rollers234are driven by the longitudinal feed motor231through the six worm gear reducers233, and the speed and the rotation direction of the six longitudinal feed rollers234are the same. The speed of the longitudinal feed motor231can be adjusted steplessly. The diameter of the six longitudinal feed rollers234is the same as that of the two lifting power rollers211. Further, there are six sets of the rear pneumatic compression roller assemblies29, which are installed on the upper part of the longitudinal feed frame28and located above the longitudinal feed roller assembly23, respectively corresponding to the six longitudinal feed rollers234. Further, there are six of the guide rollers24, whose axes are vertically arranged; the guide rollers24are installed on the longitudinal feed frame28and arranged in the same line as the eight guide wheels15, and a diameter of the guide rollers24is the same as that of the guide wheels15, which are configured to provide a longitudinal feed reference for the unit narrow board4. Further, along the longitudinal feed direction of the unit narrow board4, the first sensor25is located between a first front compression roller with bearing2241and a second front compression roller with bearing2242, and is arranged on the upper part of the longitudinal feed frame28through the first sensor bracket281. The second sensor26is located between the second front compression roller with bearing2242and the first rear compression roller with bearing2941, and is installed on the upper part of the longitudinal feed frame28through the second sensor bracket282. The third sensor27is located between the second rear compression roller with bearing2942and the third rear compression roller with bearing2943, and is installed on the upper part of the longitudinal feed frame28through the third sensor bracket283. The gluing device3includes a glue pump31, a glue barrel32, a glue application head33, a glue delivery hose34, a slide table fixed frame35, and a pneumatic slide table36. Wherein the pneumatic slide table36includes a slide table cylinder361, a slide table guide rail362, and a slide table seat363; an upper surface of the slide table cylinder361is provided with a sliding groove matched with the slide table guide rail362; the slide table seat363is fixed on the slide table fixed frame35, and the slide table fixed frame35is fixed on the longitudinal feed frame28; along the longitudinal feed direction of the unit narrow board4, the gluing device3is located between the fourth longitudinal feed roller234and the fifth longitudinal feed roller234in the longitudinal feed roller assembly23; a first glue inlet of the glue pump31is connected to the glue barrel32through a pipe, one end of the glue delivery hose34is connected to the glue outlet of the glue pump31, and the other end is connected to the glue application head33. The rotation speed of the glue pump31can be adjusted steplessly.
An upper part of the glue application head33is fixed on a lower surface of a cylinder block of the slide table cylinder361, and the cylinder block of the slide table cylinder361is suspended on the slide table guide rail362through a sliding groove on the upper surface of the slide table cylinder361. The slide table cylinder361is a double cylinder parallel piston type single-piston rod double-acting cylinder, and a piston rod of the slide table cylinder361is connected with the slide table seat363. The cylinder block of the slide table cylinder361is movable, so as to make the glue application head33move in the direction perpendicular to the longitudinal feed direction of the unit narrow board4. Along the longitudinal feed direction of the unit narrow board4, when a rodless cavity of the slide table cylinder361is filled with compressed air, a front end face of the glue application head33moves to a glue application position 2 mm to a left side of a guide surface composed of six of the guide rollers24, waiting for the unit narrow board4. Further, the glue application head33is a hollow structure, a rear end face of the glue application head33is provided with a second glue inlet, a front end face of the glue application head33is a vertical plane parallel to the longitudinal feed direction of the unit narrow board4, and there are chamfered inclined planes between the front end face and the two sides of the glue application head33. A plurality of horizontal glue guiding grooves with identical width and depth are arranged vertically one above the other on the front end face and are distributed from the middle to the left, and the spacing between two adjacent horizontal glue guiding grooves is equal. Small round holes for glue liquid discharge are provided at a right end of each of the plurality of horizontal glue guiding grooves, the small round holes communicate with a chamber in the glue application head33, and all the small round holes are arranged on the same vertical line. The right end of each of the plurality of horizontal glue guiding grooves is a semi-circular closed structure, while the left end of each of the plurality of horizontal glue guiding grooves is an open structure, so as to guide glue liquid discharged from the small round holes, so that the glue liquid adhered to a side of the unit narrow board4forms parallel glue lines. Optionally, the fixed chain transport mechanism11includes a first servo drive unit111, a first edge baffle112and a first conveyor chain assembly113; the first conveyor chain assembly113includes a second chain, a driving chain wheel, a driven chain wheel, a chain carrier plate and a support plate, and the first conveyor chain assembly113is driven by the first servo drive unit111; along the transverse feed direction of the unit narrow board4, the first edge baffle112is located on a left side of the first conveyor chain assembly113and configured to align a left end of the unit narrow board4.
Optionally, the movable chain transport mechanism12includes a second servo drive unit121, a caster122, a second edge baffle123and a second conveyor chain assembly124; a structure of the second conveyor chain assembly124is the same as that of the first conveyor chain assembly113, and the second conveyor chain assembly124is driven by the second servo drive unit121; along the transverse feed direction of the unit narrow board4, the second edge baffle123is located on a right side of the second conveyor chain assembly124and configured to limit a rightward movement of the unit narrow board4; and the running speed and direction of the second conveyor chain assembly124are consistent with those of the first conveyor chain assembly113. Optionally, each of the front pneumatic compression roller assemblies22includes a front compression cylinder221, a front fisheye joint222, a front compression roller seat223, a front compression roller with bearing224, a front rolling slide block225and a front rolling guide rail226; the front compression cylinder221is fixed on a beam at the upper part of the longitudinal feed frame28, a piston rod of the front compression cylinder221is connected with the front compression roller seat223through the front fisheye joint222, the front compression roller with bearing224is installed on the front compression roller seat223through a shaft, the front compression roller seat223is connected with the front rolling slide block225, the front rolling slide block225is installed on the front rolling guide rail226, and the front rolling guide rail226is vertically fixed on the upper part of the longitudinal feed frame28; the front compression cylinder221is capable of driving the front compression roller with bearing224to rise and fall, so as to compress the unit narrow board4. The structure and installation method of each of the rear pneumatic compression roller assemblies29are the same as those of the front pneumatic compression roller assemblies22. Each of the rear pneumatic compression roller assemblies29includes a rear compression cylinder291, a rear fisheye joint292, a rear compression roller seat293, a rear compression roller with bearing294, a rear rolling slide block, and a rear rolling guide rail; the rear compression cylinder291is fixed on a beam at the upper part of the longitudinal feed frame28, a piston rod of the rear compression cylinder291is connected with the rear compression roller seat293through the rear fisheye joint292, the rear compression roller with bearing294is installed on the rear compression roller seat293through a shaft, the rear compression roller seat293is connected with the rear rolling slide block, the rear rolling slide block is installed on the rear rolling guide rail, and the rear rolling guide rail is vertically fixed on the upper part of the longitudinal feed frame28; the rear compression cylinder291is capable of driving the rear compression roller with bearing294to rise and fall, so as to compress the unit narrow board4. Optionally, a height of the glue application head33meets a requirement of a maximum thickness of the unit narrow board4, and the number of small round holes for gluing on a front end face of the glue application head33is determined according to the thickness of the unit narrow board4. When the thickness specification of the unit narrow board4changes, the glue application head33can be replaced with one having a corresponding number of small round holes for glue liquid discharge.
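Since the number of glue discharge holes is chosen according to the board thickness, the selection can be reduced to a small calculation. The sketch below is only a hedged illustration with assumed values (the groove pitch and edge margins are not specified in the patent); it estimates how many stacked glue lines fit on a board side of a given thickness.

```python
import math

def holes_for_thickness(board_thickness_mm, groove_pitch_mm=5.0,
                        edge_margin_mm=3.0):
    """Return an assumed number of glue discharge holes for a board thickness.

    groove_pitch_mm and edge_margin_mm are hypothetical values; the patent
    only states that the hole count follows the board thickness.
    """
    usable = board_thickness_mm - 2 * edge_margin_mm
    if usable <= 0:
        return 1  # very thin board: a single centred glue line
    # One hole per groove pitch across the usable height, plus the first line.
    return 1 + math.floor(usable / groove_pitch_mm)


if __name__ == "__main__":
    for t in (12, 18, 25, 40):  # example board thicknesses in mm
        print(f"{t} mm board -> {holes_for_thickness(t)} glue holes")
```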
A working method of the unit narrow board glue application device suitable for high-frequency hot pressing of solid wood edge glued panel, including the following steps:Step 1, starting the longitudinal feed motor231and the lifting power roller motor214to rotate the six longitudinal feed rollers234and the two lifting power rollers211, and adjusting the rotation speed; then starting the first servo drive unit111and the second servo drive unit121to make the fixed chain transport mechanism11and the movable chain transport mechanism12operate synchronously, and adjusting the operation speed; injecting compressed air into a rod cavity of the lifting cylinder217to lower the two lifting power rollers211to a lowest initial position; at the same time, injecting compressed air into rod cavities of the front compression cylinder221of the two sets of the front pneumatic compression roller assemblies22located above the lifting power roller assembly21, so that the two front compression rollers with bearings224are lifted to a highest initial position; then injecting compressed air into the rodless cavities of the rear compression cylinders291of the six sets of rear pneumatic compression roller assemblies29located above the longitudinal feed roller assembly23to lower six rear compression rollers with bearings294to a predetermined working height; injecting compressed air into a rod cavity of the slide table cylinder361to make the glue application head33at an initial position on a right side; setting the rotation speed of the glue pump31to control glue liquid of the glue pump31according to requirements of a longitudinal feed speed and a glue amount of the unit narrow board4.Step 2, placing a plurality of unit narrow boards4on the transverse chain feeding device1by a manipulator or other loading device, and a left end of the unit narrow board4is aligned with the first edge baffle112; the unit narrow board4is transferred to a rear end of the transverse chain feeding device1, so that a side of the unit narrow board4to be glued is close to the guide wheels15and the guide rollers24; along the longitudinal feed direction of the unit narrow board4, a front section of the unit narrow board4is located at the lifting power roller assembly21.Step 3, sending, by a control system, a signal to control the first servo drive unit111and the second servo drive unit121to stop synchronously when the first sensor25detects that the unit narrow board4reaches the rear end of the transverse chain feeding device1, so as to stop a transverse feeding movement; at the same time, the lifting power roller assembly21is lifted under an action of the lifting cylinder217, so that the height of the two lifting power rollers211is the same as that of the six longitudinal feed rollers234; at the same time, the two front compression rollers with bearings224located above the lifting power roller assembly21are driven down by the front compression cylinder221, and then to press on an upper surface of the unit narrow board4, so that the unit narrow board4starts a longitudinal feed motion under a longitudinal feed driving force; when the unit narrow board4enters a region of the longitudinal feed roller assembly23, the longitudinal feed motion continues under an action of the longitudinal feed rollers234and the six rear compression rollers with bearings294located above the longitudinal feed rollers234.Step 4, sending, by the control system, a control signal to the lifting cylinder217and the front compression cylinder221, the glue pump31, and the slide 
table cylinder361when the third sensor27detects a front end of the unit narrow board4, so as to control the two lifting power rollers211to lower and the two front compression rollers with bearings224to lift, to disengage from the unit narrow board4and return to the initial positions; at the same time, the piston rod of the slide table cylinder361is controlled to stretch out, and the cylinder block moves to a glue position with the glue application head33towards the unit narrow board4; at the same time, the glue pump31is controlled to start and supply glue liquid to the glue application head33; with the longitudinal feed of the unit narrow board4, when the front end of the unit narrow board4contacts the chamfered inclined planes of the glue application head33, a right lateral thrust is generated on the glue application head33, which overcomes a force of the slide table cylinder361and makes the glue application head33move back to the right, so that a front end face of the glue application head33floats and presses on the side of the unit narrow board4under an action of the slide table cylinder361; the glue liquid is discharged from the small round holes on the front end face of the glue application head33and adhered to the side of the unit narrow board4, with the longitudinal feed of the unit narrow board4, the glue liquid forms multiple horizontal glue lines with equal spacing on the side of the unit narrow board4, and the glue lines are evenly distributed on a gluing surface under an action of pressure during the subsequent assembly and hot pressing; with continued feeding of the unit narrow board4along the longitudinal direction, sending, by the control system, a delay control signal to the glue pump31when the third sensor27detects a rear end of the unit narrow board4, so that the side of the unit narrow board4is completely glued, and then the glue pump31is controlled to stop, and the piston rod of the slide table cylinder361is retracted, and the cylinder block moves back with the glue application head33to separate from the unit narrow board4, so as to complete the gluing of the first unit narrow board4.Step 5, sending, by the control system, a signal to the transverse chain feeding device1to make the first servo drive unit111and the second servo drive unit121start synchronously again when the second sensor26detects that the rear end of the unit narrow board4is passing, so that the fixed chain transport mechanism11and the movable chain transport mechanism12resume operation, and the unit narrow board4is moved laterally by a distance equal to the width of the unit narrow board4; at the same time, the longitudinal roller feeding device2and the gluing device3are started again to carry out a transverse feeding, a longitudinal feeding and a side glue application of a next unit narrow board4, so as to cycle back and forth. The above description is one of the embodiments of the disclosure, and modified solutions made by a person of ordinary skill in the art without departing from the spirit of the present disclosure are still within the scope of the disclosure. Obviously, the described embodiments are only some of the embodiments of the disclosure, not all of them. Based on the embodiments in the disclosure, other embodiments obtained by a person of ordinary skill in the art are within the scope of the disclosure. | 23,611
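The working method described above (Steps 1 to 5) is essentially a sensor-driven sequence: sensor events gate the lifting rollers, compression rollers, glue pump and slide table cylinder. The following sketch is only an illustration of that sequence; the actuator and sensor interfaces (the `io.*` calls), the delay value and the event ordering helpers are all hypothetical and are not taken from the patent.

```python
import time

def gluing_cycle(io, glue_delay_s=0.5):
    """One gluing cycle, loosely following Steps 3-5 of the working method.

    `io` is a hypothetical hardware-abstraction object; its methods mirror
    the sensors and actuators named in the text (first/second/third sensor,
    lifting cylinder, front compression cylinders, glue pump, slide table).
    """
    # Step 3: board detected at the rear end of the transverse feeder.
    io.wait_for(io.first_sensor)
    io.stop_transverse_chains()           # both servo drive units stop
    io.raise_lifting_rollers()            # lifting cylinder raises the rollers
    io.lower_front_compression_rollers()  # board starts the longitudinal feed

    # Step 4: front end reaches the third sensor -> hand over to the glue head.
    io.wait_for(io.third_sensor)
    io.lower_lifting_rollers()
    io.raise_front_compression_rollers()
    io.extend_slide_table()               # glue head floats against the board side
    io.start_glue_pump()

    # Step 5: rear end passes the second sensor -> index the next board early.
    io.wait_for_falling_edge(io.second_sensor)
    io.start_transverse_chains()

    # Rear end passes the third sensor -> finish the glue line, then retract.
    io.wait_for_falling_edge(io.third_sensor)
    time.sleep(glue_delay_s)              # delay so the side is fully glued
    io.stop_glue_pump()
    io.retract_slide_table()
```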
11858167 | DETAILED DESCRIPTION OF THE INVENTION Concerning the subsequent description, the spatial directions which are aligned orthogonally to one another and which are denoted as the x-direction, the y-direction and the z-direction are referred to. The z-direction can also be denoted as the height direction and runs vertically. The x-direction and the y-direction are horizontal directions. Firstly, the basic construction of the bench saw is to be dealt with. The subsequent explanation expediently applies to all bench saws which are mentioned here, in particular to the bench saws10,20,30which are shown inFIGS.2and6. Preferably, all the bench saws which are mentioned here are the same bench saw. The bench saw10,20,30by way of example is designed as a circular bench saw. In particular, the bench saw10,20,30is a semi-stationary tool. The bench saw10,20,30comprises a saw blade2, in particular a circular saw blade. Expediently, the bench saw10,20,30comprises a drive unit86for the drive, in particular for the rotation drive, of the saw blade2. The drive unit86expediently comprises an electric motor. The bench saw10,20,30comprises a support structure12which by way of example is designed as a table. The support structure12comprises a lay-on section1, whose lay-on section upper side8serves as a lay-on surface8for a workpiece11which is to be machined with the saw blade2. The lay-on section1by way of example is designed in a plate-like manner, in particular as a bench plate. The lay-on section1expediently has a rectangular base surface. The lay-on surface8is expediently rectangular. The lay-on surface8is aligned normally to the z-direction and is preferably plane. The support structure12further comprises several stand legs7—by way of example four stand legs7which are arranged at the four corner regions of the lay-on section1—via which the lay-on section1is supported with respect to the base. Expediently, free regions which by way of example extend more than three-quarters of the vertical extension of the stand legs7are present between the stand legs7. In particular, no trim, in particular no housing wall is present between two or more stand legs7. A user can grip through and between two stand legs7into the free space which is present below the lay-on section lower side14. An opening, by way of example a slot, through which the saw blade2engages, is present in the lay-on section1, in particular the lay-on surface8. The opening is aligned with its longitudinal extension in the x-direction. A part of the saw blade2is located above the lay-on section1, in particular above the lay-on surface8, and a further part of the saw blade is located below the lay-on section1, in particular below the lay-on section lower side14. Purely by way of example, the bench saw10,20,30has a safety function, which is automatically activated given a detected contact between the saw blade2and a body part of a person. Expediently, the bench saw10,20,30is designed, via the provision of an electrical signal to the saw blade2, to detect an electrical characteristic, in particular electrical conductivity, of an object which is in contact with the saw blade, in order to determine whether the object is a human body part. The bench saw10,20,30is further designed to carry out the safety function on the basis of the detected electrical conductivity. 
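The decision behind this safety function can be pictured as a simple threshold test on the electrical measurement taken through the blade. The sketch below is only a schematic illustration, not the patented detection (the actual circuitry referenced in the following paragraph is more involved); the conductance band, the measurement function and the actuator callables are hypothetical assumptions.

```python
# Illustrative only: a body part is assumed to show up as a conductance
# inside a characteristic band, while dry wood stays far below it.
BODY_CONDUCTANCE_BAND_US = (5.0, 500.0)   # microsiemens, assumed band

def is_body_contact(conductance_us: float,
                    band=BODY_CONDUCTANCE_BAND_US) -> bool:
    """Classify a blade-contact measurement as human tissue or not."""
    low, high = band
    return low <= conductance_us <= high

def monitor(read_blade_conductance, brake_blade, drop_blade_below_table):
    """Poll the blade signal and trigger the safety reaction on body contact.

    The three callables are hypothetical interfaces to the measuring
    electronics, the blade brake and the blade-lowering actuator.
    """
    while True:                           # simple polling loop for the sketch
        g = read_blade_conductance()      # microsiemens
        if is_body_contact(g):
            brake_blade()                 # stop, in particular brake, the blade
            drop_blade_below_table()      # bring it below the lay-on surface
            break
```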
Concerning the design of the safety function, for example the saw blade2is stopped, in particular braked, and/or the saw blade2is brought into a safety position, in particular a safety position at which the saw blade2is located completely below the lay-on surface8. The manner of functioning of such a detection of a contact of the saw blade2with a body part and of such a safety function is known e.g. from EP 1 234 285 B1, so that a more detailed description of the manner of functioning is omitted here. The bench saw10,20,30by way of example comprises an actuation section3which by way of example is shown in theFIGS.7and8. The actuation section3is designed to set the position of the saw blade2relative to the lay-on surface8. Expediently, the actuation section3is designed to pivot the saw blade2relative to the lay-on surface8about a pivot axis which runs in the x-direction, about an angle between the cutting plane of the saw blade2and the lay-on surface8. Expediently, the actuation section3is further designed to bring the saw blade2into different positions along a linear movement path, in particular a vertical and/or linear movement path which is pivotable about the aforementioned pivot axis, in order to adjust how far the saw blade2projects upwards out of the opening. In this manner, for example the cutting depth of the bench saw10,20,30can be adjusted. Hereinafter, the saw blade cover40is to be dealt with in detail. The saw blade cover40can be provided as part of one of the bench saws10,20,30which are described here, and/or be provided on its own. Expediently, the saw blade cover40already on its own represents an embodiment. The saw blade cover40is shown on its own inFIG.1. In theFIGS.2and6, the saw blade cover40is shown in a state in which it is fastened to the bench saw10,20,30. The saw blade cover40in particular is designed as a saw blade hood and comprises a cover body41which extends in a longitudinal direction (of the saw blade cover40) and which is manufactured of a first material, for the at least partial covering of the saw blade2of the bench saw10,20,30. The saw blade cover40further comprises a contact region47which is arranged on the cover body41at the face side. The contact region47is bevelled relative to the longitudinal direction of the saw blade cover40, so that a deflection of the cover body41can be effected by the contact of the contact region47with a workpiece11which moves towards the saw blade2. The contact region47comprises a sliding section48which is manufactured of a second material and along which the workpiece11can slide on contact with the contact region47. The sliding section48is provided for preventing the workpiece11from cutting into the contact region47on contact with it and from snagging on the contact region47. The second material differs from the first material. Expediently, the second material has a greater hardness than the first material. The first material in particular is plastic and/or the second material in particular is metal, preferably aluminium. The saw blade cover40, in particular the cover body41, is preferably designed in an elongate and in particular flat or plate-like manner. By way of example, the cover body41is designed in a sword-like manner. In a designated alignment, the sides of the cover body41which are largest in surface area—in particular the longitudinal sides—are aligned normally to the horizontal direction, in particular normally to the y-direction, as is shown inFIGS.2and6.
An opening43, in particular a slot, into which the saw blade2can be at least partly received is present on the lower side of the cover body41. The saw blade2in the state, in which it is at least partly located in the opening43, is covered by way of example by the longitudinal sides, the upper side and/or the front face side of the cover body41. The saw blade cover40comprises a connection section44which by way of example comprises a suction connection45. A suction tube (not shown in the figures) can be connected onto the suction connection45, in order to vacuum dust particles, which arise on sawing the workpiece11. The connection section44comprises a mechanical interface for fastening the saw blade cover40to a fastening section46of the bench saw10,20,30, in particular to the riving knife9. The riving knife9by way of example is arranged behind the saw blade2in the x-direction. The riving knife9by way of example projects out of the opening of the lay-on surface8. Expediently, the cover body41is pivotably mounted on the connection section44about a horizontal axis, in particular an axis which runs in the y-direction. The connection section44is arranged on the rear face side of the saw blade cover40in the longitudinal direction of the saw blade cover40. In the state in which it is fastened to the fastening section46, thus given a designated installation of the saw blade cover40on the bench saw10,20,30, the saw blade cover40is expediently aligned with its longitudinal direction parallel to the x-direction. The contact region47which has already been mentioned above is arranged on the front face side of the saw blade cover40in the longitudinal direction of the saw blade cover40. Expediently, the contact region47represents the front face side of the saw blade cover40, in particular the front face side of the cover body41. The contact region47is bevelled with respect to the longitudinal detection of the saw blade cover40. As is to be seen inFIGS.2and6, the contact region47(in the installed state of the saw blade cover40) is bevelled, thus aligned in an inclined manner, with respect to the x-direction and with respect to the lay-on surface8. In particular, the contact region47is aligned normally to an x-z direction. In an x-z section, the contact region47coming from the bottom runs obliquely upwards to the right, in the direction away from the saw blade2. The cover body41expediently comprises a contact location53which in the installed state of the saw blade cover40preferably forms the deepest point of the saw blade cover40and/or lies on the lay-on surface8(inasmuch as the cover body41is not deflected, in particular lifted, by way of impingement with the workpiece11). The contact region47is expediently arcuate in an x-z section, in particular in a parabola-shaped manner. The contact region47is bevelled in a manner such that when the workpiece11which is located on the lay-on surface and which moves in the x-direction in the direction to the saw blade2hits the contact region47, a force component (in particular a force component upwards) is provided, which effects a deflection, in particular a pivoting, of the cover body41relative to the saw blade2, to the lay-on surface8and/or to the connection section44, in particular a deflection and/or a pivoting upwards. By way of the workpiece11being pressed by the user against the contact region47, the cover body41can be changed in its position, so that the cover body41releases the path to the saw blade2and/or the cutting region of the saw blade2. 
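The reason the bevelled contact region47lifts the cover at all is plain force decomposition: a horizontal push against an inclined face yields an upward component. The short sketch below is only a worked illustration using a simple wedge model; the bevel angle, push force and friction value are assumptions and do not come from the patent.

```python
import math

def lift_force(push_force_n, bevel_angle_deg, friction_coeff=0.1):
    """Vertical force lifting the cover body when a workpiece pushes the bevel.

    Assumed wedge model (not from the patent): the contact region makes
    bevel_angle_deg with the lay-on surface, the workpiece pushes horizontally
    with push_force_n, and friction_coeff acts on the sliding section.
    """
    t = math.tan(math.radians(bevel_angle_deg))
    # Standard wedge relation: lift = push * (1 - mu*tan(a)) / (tan(a) + mu).
    return push_force_n * (1.0 - friction_coeff * t) / (t + friction_coeff)


if __name__ == "__main__":
    # Example: 20 N horizontal push against a 45 degree bevel -> about 16.4 N lift.
    print(round(lift_force(20.0, 45.0), 1), "N upward")
```

A flatter bevel or a lower-friction sliding surface (such as the metal ribbon) increases the lifting component, which is consistent with the purpose of the sliding section: the workpiece should slide and lift the cover instead of digging into it.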
Expediently, the cover body41herein lifts with its contact location53from the lay-on surface8. During the movement of the workpiece11to the saw blade2, the workpiece11for a part of the path remains in contact with the contact region47and herein slides along the sliding section48. The cover body41is herein pressed further and further upwards by way of the contact of the workpiece11with the contact region47. Before the contact of the workpiece11with the contact region47, the cover body41is expediently situated in a covering position, in which the cover body41at least partly covers the saw blade2and the contact location53expediently lies on the lay-on surface8. If the contact region47is impinged by the workpiece11which moves in the direction of the saw blade2, then the cover body41is deflected further and further until it is situated in the uncovering position, in which the saw blade2is uncovered to a greater extent than in the covering position. Expediently, in the uncovering position the cover body41is deflected to such an extent that the workpiece11can be fed to the cutting region of the saw blade2and can be machined by the saw blade2. The sliding section48is expediently designed in a strip-like, in particular ribbon-like manner. The sliding section48is preferably designed as a metal ribbon, in particular as an aluminium ribbon. Alternatively, the sliding section48can also be manufactured from another hard material which is expediently harder than the first material of the cover body41. The sliding section48, in particular the aluminium ribbon, is recessed into the contact region47, in particular the contact surface, and/or is placed, in particular bonded, onto it. The contact region47and the sliding section48are preferably designed in an elongate manner. Expediently, the sliding section48runs in the longitudinal direction of the contact region47. The longitudinal direction of the contact region47and/or of the sliding section48expediently runs diagonally upwards relative to the lay-on surface8, in particular in a direction away from the saw blade2. The contact region47is expediently a face-side contact surface, in particular of the cover body41. The sliding section48is expediently only present in a part-region, in particular in a part-region which is central in the horizontal direction, in particular in the y-direction, of the face-side contact surface. Expediently, surface sections49, at which the sliding section48is not present, are formed laterally of the sliding section48in the y-direction. These surface sections49are expediently manufactured of the first material and by way of example assume more than half the y-extension of the contact surface, so that the sliding section48expediently assumes less than half the y-extension. According to an alternative design, the complete contact region, in particular the complete contact surface is manufactured of the second material, so that the complete contact region represents the sliding section. The sliding section48expediently extends over more than half the longitudinal extension of the contact region47. By way of example, the cover body41comprises a projection52which projects obliquely upwards and which is arranged in the region of the front face side of the cover body41, and provides a part of the contact region47. The sliding section48expediently extends into the part of the contact region47which is provided by the projection52. The sliding section48can also be denoted as a reinforcement of the end face of the saw blade cover.
The saw blade cover can also be denoted as a protective device. The sliding section48is expediently a strip of a hard material, which is fastened to the protective device such that the end-face—in particular the contact region47—of the protective device can slide with the strip on a contact edge of a cut material of a workpiece which is to be machined. The sliding section48—by way of example the hard material strip—protects the end face of the protective device from jamming (digging/notching in) on the sharp contact edge of the cut material which on leading the cut material slides over the hard material strip into the cut. Alternatively or additionally, as an embodiment, a saw blade cover is further provided, whose cover body as a whole is manufactured of metal, in particular aluminium. This saw blade cover is expediently designed as explained above, with the exception of the aspect that the sliding section is not manufactured of a second material but of the same material as the remaining cover body. The invention further relates to a stop unit50, in particular to an angle stop, for guiding and/or for the feed of a workpiece11to a saw blade2of a bench saw. The stop unit50on its own provides an embodiment and is shown on its own inFIG.3. The stop unit50can also be provided as part of a bench saw.FIG.6shows a corresponding bench saw20which comprises a stop unit50. The stop unit50comprises a stop arm55for the bearing contact of the workpiece11, said stop arm extending in a longitudinal direction54. InFIG.6, the stop arm55is aligned with its longitudinal direction by way of example in the y-direction. The stop arm55comprises a main section51which is manufactured of a first material and which extends over more than half the longitudinal extension of the stop arm55. The feature “first material” which is mentioned here is a different material than the feature “first material” which is mentioned above with reference to the saw blade cover40. The “first material” which is mentioned here with reference to the stop unit50can also be denoted as a “third material” or as a “main section material” for the purpose of a better differentiation. The stop arm55further comprises at least one second end section52which is arranged at an end of the main section51which is situated in the longitudinal direction54. By way of example, the stop arm55on both ends which are located in the longitudinal direction each comprises a second end section52, so that the stop arm comprises two end sections52. The second end section52, given the feed of a workpiece11towards the saw blade2, can come into contact with the saw blade2. In particular, the stop unit50in a state, in which it is mounted on the support structure12of the bench saw20, can be brought into a position, in which the end section52comes into contact with the saw blade2, in particular with the cutting region of the saw blade2. The at least one end section52is manufactured of a second material which differs from the first material. The feature “second material” which is mentioned here is a different material than the feature “second material” which is mentioned above with reference to the saw blade cover40. The “second material” which is mentioned here with reference to the stop unit50can also be denoted as a “fourth material” or as an “end section material” for the purpose of an improved differentiation. The second material differs from the first material. 
In particular, the second material has a different conductivity than the first material, expediently a lower conductivity. By way of example, the first material is metal and/or the second material is plastic. In particular, the second material has another, preferably lower electrical conductivity than the human body. By way of providing the end section52of the second material, one can prevent the aforementioned safety function of the bench saw20being activated (inasmuch as it is present), given contact of the end section52with the saw blade2. The bench saw20is expediently designed to carry out the safety function, in particular for stopping the saw blade2and/or for bringing the saw blade2into a safety position, on the basis of an electrical characteristic, in particular an electrical conductivity, of an object which is in contact with the saw blade2. The electrical characteristic, in particular the electrical conductivity of the second material is preferably such that the bench saw20does not carry out the safety function given the contact of the saw blade with the end section52. The electrical characteristic, in particular the electrical conductivity of the first material is such that the bench saw20carries out the safety function given a contact of the saw blade with the first material. The stop arm55is expediently designed in a strip-like manner and can also be denoted as a guide strip. The stop arm55is aligned with its side which is largest with regard to surface area—its longitudinal side—normally to a horizontal direction, expediently orthogonally to the lay-on surface8. The main section51of the stop arm55expediently extends over more than 70% of the longitudinal extension of the stop arm55. The main section51is designed in a strip-like manner and is aligned with its side which is largest with regard to surface area orthogonally to the lay-on surface8. The longitudinal sides of the main section51by way of example are rectangular. The main section51is expediently an extrusion profile and in particular is manufactured of metal. An end section52connects onto both ends of the main section51in the longitudinal direction. Each end section expediently assumes less than 20% of the longitudinal extension of the stop arm55. Each end section52expediently continues the transverse-side outer contour of the main section51in the longitudinal direction, so that the stop arm55as a whole results in a strip-like body with a constant thickness. Each end section52expediently has the same thickness as the main section51. Expediently, the face side of each end section52which is located in the longitudinal direction of the stop arm55is bevelled. Each end section52is aligned with its side which is largest with respect to surface area—the longitudinal side—normally to the lay-on surface8. The longitudinal side of each end section52by way of example is triangular due to the bevelling at the respective face side. Each end section52is expediently removably attached to the main section51. By way of example, each end section52is stuck into and/or onto a face side of the main section51. Expediently, each end section52is designed as a plastic cap. Each end section52is expediently fastened to the main section51via a latching connection. By way of example, each end section52comprises a latching element62which engages into a corresponding latching opening of the main section51. Each end section52can be expediently removed from the main section51and be replaced by a new, identically designed end section52.
Expediently, each end section52is a wearing part. In particular, a tool-free and/or destruction-free attachment and/or removal of each end section52onto and from the main section51is possible. The stop arm55expediently further comprises a cover layer53which is arranged on a longitudinal side of the stop arm55. The cover layer53in particular is arranged on that longitudinal side of the stop arm55, on which the workpiece11bears given designated use of the stop unit50. Expediently, the cover layer53covers the complete longitudinal side of the stop arm55. The cover layer53can be expediently manufactured of a material which on contact with the saw blade2does not activate the aforementioned safety function, thus in particular has a lower electrical conductivity than the human body and/or the main section51. The stop unit50further comprises a bearing section67with which the stop unit50can be movably mounted on the support structure12of the bench saw20. The bearing section67comprises a roller section56, on which several rollers57,58are arranged, via which rollers the bearing section67can be movably mounted on the support structure12. The roller section56by way of example comprises a horizontal section64which extends in the horizontal direction and on which several, by way of example two vertically extending vertical sections65are arranged. Several rollers57,58, by way of example two rollers per vertical section65, are arranged on each vertical section65. Expediently, two of the rollers57,58are aligned to one another in different spatial directions, in particular orthogonally to one another. By way of example, each vertical section65comprises two rollers57,58which are aligned in different spatial directions—by way of example a horizontal roller57, whose roller plane is aligned parallel to a horizontal plane, and a vertical roller58whose roller plane is aligned parallel to a vertical plane. The roller section56expediently further comprises a groove nut63. The groove nut63in particular is designed in an elongate manner and is expediently arranged on the lower side of the horizontal section64. The stop arm55is expediently pivotably mounted relative to the bearing section67about a vertical pivot axis via a pivot bearing60. The pivot bearing60by way of example comprises an angle scale. The stop arm55is further expediently mounted in a linearly movable manner relative to the bearing section67via a linear bearing. By way of example, the stop unit50comprises a connection section59which by way of example is elongate and which preferably extends in a horizontal direction, in particular in a direction which is aligned orthogonally to the longitudinal direction of the horizontal section64. Expediently, the stop arm55is pivotably mounted on the connection section59via the pivot bearing60and the connection section59is linearly movably mounted, in particular in the longitudinal direction of the connection section59, on the bearing section67via the linear bearing. Expediently, one or more operating elements are present, in order to block and/or release the linear bearing, the pivot bearing60and/or the mounting of the bearing section67on the support structure. The mounting of the stop unit50on the support structure12is shown by way of example in theFIGS.5and6. In particular, the support structure12of the bench saw20comprises one or more guide elements6, on which the bearing section67is mounted or can be mounted.
By way of example, each guide element6is arranged on the respective peripheral wall5of the support structure12, in particular of the lay-on section1. Each peripheral wall is aligned normally to a horizontal direction. Each guide element6is expediently designed as a rail element which runs in the horizontal direction. By way of example, each guide element6provides a groove, in particular a V-groove, which runs in the horizontal direction and into which the aforementioned groove nut63is inserted on mounting the bearing section67on the support structure12. Each guide element6is expediently arranged offset to the bottom in the z-direction relative to the lay-on surface8. Each guide element6provides a respective linear guide path for the bearing section67. The bearing section67can be removed from a guide element6and be attached onto another guide element6, for example on another peripheral wall5. InFIG.6, the stop unit50is mounted with its bearing section67by way of example on a peripheral wall5which is aligned normally to the y-direction. The stop unit50can be moved in a linearly movable manner in the x-direction along the guide element6which is located on this peripheral wall5. Herein, the stop arm55is located at least partly on or above the lay-on surface8and can be positioned in its x-coordinate, in particular relative to the lay-on section1and/or to the saw blade2, by way of the linear movement of the stop unit50. The stop unit50can be removed from the peripheral wall5which is aligned normally to the y-direction, and be attached to a peripheral wall5which is aligned normally to the x-direction. There, the stop unit50can then be moved in the y-direction, in order to position the stop arm55in its y-coordinate. FIG.6shows a detailed view of the mounting of the bearing section67on the peripheral wall5and on the guide element6of the support structure12. The bearing section67is engaged with the guide element6. By way of example, the groove nut63is inserted into the groove which is provided by the guide element6. The vertical rollers58bear on the guide element6from below. The horizontal rollers57bear on the peripheral wall5. The guiding of the stop unit50is therefore effected in a V-groove which is not (as is conventionally common) arranged in the bench plate—by way of example in the lay-on surface8—but instead on the peripheral wall5. Furthermore, the guidance is effected by way of rollers on the V-groove. Additionally, there are expediently lateral rollers—the horizontal rollers—which counteract a tilting. The rollers are rotatably mounted on the bearing section which can also be denoted as the angle stop frame. The guide arm is attached (by way of example via the connection section) to the frame of the angle stop and lies on the lay-on section which can also be denoted as the saw bench. The guide arm guides the workpiece to be machined. The guide arm is preferably adjustable at an arbitrary angle with respect to the cutting plane in the range of +/−90°. At this angle, it is possible to fasten it to the angle stop frame and to guide and/or push the material into the cutting region of the saw blade at this angle. The rotation angle of the guide arm can be read off at the angle scale which is part of the angle stop. The guide arm (which can also be denoted as a guide strip) comprises protective caps which are stuck on at the ends and which on sawing in prevent an activation of the safety function, in particular an active injury mitigation, AIM function. 
The AIM function can also be denoted as an active injury reduction function. A bench saw30which provides a further embodiment is hereinafter explained with reference toFIGS.2,7and8. The bench saw30is expediently designed as one of the aforementioned bench saws. The bench saw30comprises the support structure12with the lay-on section1which provides the lay-on surface8for laying on the workpiece11, as well as the saw blade2which engages through the opening of the lay-on section1, so that at least a part of the saw blade2is situated below the lay-on section lower side14. The bench saw30further comprises a housing arrangement80which is located below the lay-on section lower side14and which surrounds the part of the saw blade2which is located below the lay-on section lower side14. The saw blade2is pivotable together with the housing arrangement80relative to the lay-on section1about an angle between a cutting plane of the saw blade2and the lay-on surface8. The bench saw30comprises the cover flap87which covers a gap between the housing arrangement80and the lay-on section lower side14, said gap arising on pivoting the housing arrangement80, in order to thus prevent a user of the bench saw30from being able to grip through the gap to the part of the saw blade2which is located below the lay-on section lower side14. A section of the lay-on section lower side14and of an actuation section3which is located thereon is shown in theFIGS.7and8. As already explained above, the actuation section3serves for pivoting the saw blade about a horizontal pivot axis, in particular a pivot axis which runs in the x-direction. The actuation section3comprises a bearing section81which is arranged on the support structure12, in particular on the lay-on section lower side14, and a pivoting section82which is pivotably mounted relative to the bearing section81about the mentioned pivot axis. The pivot axis by way of example is defined by a guide section94which is present on the bearing section81, in particular by a guide slot, on which the pivoting section82is guided. FIG.7shows the pivoting section82in a normal position, in which the pivoting section82is not pivoted and is aligned orthogonally to the lay-on section lower side14. FIG.8shows the pivoting section82in a pivot position, in which the pivot section82is pivoted and is not aligned orthogonally to the lay-on section lower side14. The pivoting section82by way of example comprises the housing arrangement80, the saw blade2and expediently the drive unit86. The drive unit86and the saw blade2are preferably mounted in a linearly movable manner relative to the housing arrangement80, in order to adjust how far the saw bade2projects out of the opening in the lay-on surface8. The housing arrangement80by way of example comprises a housing shell83, whose side which is largest with regard to surface area, hereinafter also denoted as a lateral side85, is aligned normally to the y-direction in the normal position. The housing shell83further comprises an upper side84which is aligned orthogonally to the lateral side85and which in the normal position of the pivoting section82is aligned parallel to the lay-on section lower side14. The housing shell83further comprises a transverse side which is aligned orthogonally to the lateral side85and orthogonally to the upper side84and which is expediently aligned normally to the x-direction. The cover flap87is expediently manufactured of a rigid material, preferably metal. Expediently, the cover flap87is designed as a cover plate. 
The cover flap87by way of example comprises a plate-like cover section88. The cover flap87is designed in an elongate manner and with its longitudinal direction is aligned in the x-direction. The cover flap87by way of example is mounted on the housing arrangement80, in particular on the transverse side93, and on the support structure, in particular on the lay-on section lower side14. By way of example, the cover flap87comprises a first fastening section89which is expediently designed as a tab, and is pivotably mounted on the housing shell83, in particular the transverse side93, via the first fastening section89. The cover flap87further comprises a second fastening section91, with which the cover flap87is expediently mounted on the support structure12, in particular in a guide slot92, in a linearly movable manner. The second fastening section91is expediently an edge region of the cover flap87. The guide slot92is expediently an intermediate space between the lay-on section lower side14and a further element of the support structure12. Alternatively, the second fastening section91can also be fixedly fastened to the support structure12. By way of example, the cover flap87can be brought from a first position into a second position by way of pivoting the pivoting section82, in particular the housing arrangement80. In particular, the cover flap87is pivoted from the first position into the second position by way of the pivoting section82being pivoted from the normal position into the pivoting position. The cover flap87in the first position is aligned parallel to the lay-on section lower side14. Expediently, the cover flap87in the first position is arranged at least partly between the upper side84and the lay-on section lower side14. In the second position, the cover flap87is aligned in a manner inclined to the lay-on section lower side14about a horizontal axis, in particular the x-axis. Expediently, the cover flap87is mounted on the housing arrangement80and the support structure12in a manner such that the cover flap87is pivoted, by way of pivoting the housing arrangement80, in the opposite direction to the pivoting movement of the housing arrangement80. Expediently, the support structure12comprises several stand legs7, via which the lay-on section1is supportable with respect to the base. Free regions, through which a user can grip below the lay-on section lower side14, are present between the stand legs7. A cover device is consequently provided by the housing arrangement80and the cover flap87and this cover device covers the saw blade2such that the user cannot contact the saw blade2below the bench. The cover device can also be denoted as a lateral saw blade cover. The saw blade2is adjustable in angle and height. The lateral saw blade cover consists of several parts which expediently cover the saw blade in all positions of the adjustable region and prevent the penetration of a finger to the saw blade below the bench. Furthermore, an accessory fastening structure100is provided. The accessory fastening structure100is not equipped inFIG.9and is shown equipped with accessories inFIG.10. The accessory fastening structure100can be provided on its own or as part of a bench saw, in particular one of the bench saws10,20,30which is described above. The accessory fastening structure100is expediently fastened to the lay-on section lower side14and extends vertically downwards from the lay-on section lower side14.
The accessory fastening structure100is designed in an essentially plate-like manner and with its plate plane is aligned normally to a horizontal direction, by way of example normally to the y-direction. The accessory fastening structure100comprises a plurality of interfaces, on which accessory elements for the bench saw, in particular accessory elements which are currently not in use, can be fastened. The accessory fastening structure100in particular serves for storing the accessory elements. The accessory elements in the fastened state are expediently freely accessible from the outside and can preferably be removed from the accessory fastening structure100in a tool-free manner. By way of example, the accessory fastening structure100comprises one, more or all of the following interfaces: a first interface101, in particular comprising a latching hook, for fastening a push stick105; a second interface102, in particular comprising a latching hook, for fastening the stop unit50; a third interface103, in particular comprising a latching hook, for fastening a riving knife9; a fourth interface104, in particular comprising a latching hook, for fastening a workcenter, for example a replacement cartridge, in particular for the safety function; and/or a fifth interface for fastening the saw blade cover. The accessory fastening structure100serves preferably for the secure storage of the accessory elements on the bench saw. For this, the mentioned interfaces are integrated on the accessory fastening structure100. The interfaces expediently comprise fixation elements. The accessory fastening structure100preferably permits the user to carry out a simple removal of the push stick105from the front side of the bench saw which is aligned normally to the x-direction. The pocket is fastened to the bench plate by way of a screw and a lateral snap closure is present, in order to reduce the horizontal movement. Furthermore, a suction plate130is provided. The suction plate130is not equipped inFIG.11and is shown equipped with accessories inFIG.12. The suction plate130can be provided on its own or as part of a bench saw, in particular one of the aforedescribed bench saws10,20,30. Expediently, the suction plate130is arranged below the lay-on section lower side14and at least partly surrounds the saw blade2. The suction plate130comprises a suction connection131, on which a suction tube for sucking dust which is produced on sawing can be fastened. In an advantageous manner, the suction plate130on its side which is away from the saw blade2comprises a first interface141for a replacement saw blade142as well as further interfaces for fastening accessory elements135, in particular tools for assembling and disassembling the saw blade2,142and the expansion knife and/or riving knife9. Preferably, at least one accessory element135represents a part of the first interface141. Concerning the present suction plate, which can also be denoted as a suction hood, the function of the suction plate is expanded to the extent that the suction plate further serves as a replacement blade holder and serves for storing the tools for assembling and disassembling the saw blade and the expansion knife.
11858169 | DETAILED DESCRIPTION FIG.1ashows the digital template of a shaped body1in the form of a tooth crown on a support structure2. Subsequently, this digital template is used to construct the shaped body1and the support structure2in layers by means of an additive manufacturing method in which slurry filled with ceramic is location-selectively cured. The tooth crown1is supported by the support structure2on the bumps of the occlusal surface of the tooth crown. The support structure2causes stiffening of the crown1during the production process in which sintering takes place. The support structure2is removed after the production of the crown. FIG.1bshows the tooth crown1after manufacture. Framed here are the regions3and4, which are face away from the arrangement of the support structure2. Comparatively large distortions occur in these regions during the sintering process. FIGS.2aand2bshow a shaped body and a frame which are produced integrally according to a first embodiment of the invention. During the construction process, that is, integrally with the shaped body (tooth crown), the sleeve-like frame6surrounding the shaped body5at a distance was constructed in layers from the building material. The sleeve-like frame6extends in the axial direction7and along its longitudinal axis8. The side wall9of the frame6extends around the axes7and8. The representation inFIG.2ais a plan view of the chewing surface of the tooth crown along the axes7and8.FIG.2bshows the frame6and the crown5arranged therein in a side view. In cross-section, that is, seen in the direction of the longitudinal axis8, the frame has a polygonal shape, here in the form of an octagon. This shape gives the frame6additional stability. The side wall9extends around the circumference of the shaped body5about the axis7. Pin-shaped connections10formed integrally with the frame and the shaped body were constructed during the construction process on the inner wall of the side wall9of the frame6. These are seen in the axial direction7distributed around the shaped body periphery and connect the shaped body to the frame. The connections10serve in particular as distortion avoidance structures and prevent distortion of the shaped body during heat treatment measures. In addition to the connections10, a supporting structure11below the shaped body5is also constructed integrally with the shaped body and the frame6. The construction object, that is, the entirety of frame6, shaped body5, connections10and support structure11, has been constructed during the construction process on a construction platform, not shown, which lies inFIG.2aon the outside of the lower frame wall, on the inside of which the support body11is located. A corresponding method, in which the construction object located in the structure on the construction platform is repeatedly dipped into the slurry and raised again and thus constructed in layers starting from the construction platform, has been described in the introduction with reference to WO 2010/045950 A1. The tensile forces acting on the shaped body5during the construction process are absorbed by the supporting structure11and are effectively absorbed by this and transmitted to the construction platform via the frame. In the axial direction7, the dimension of the shaped body5, that is, the length over which the shaped body5extends in the axial direction7, substantially corresponds to the dimension of the frame6in the axial direction7. 
The pin-like connections are distributed in groups of three pin-like connections over the circumference of the shaped body5, here substantially at a distance of about 90°. Other angular distances (for example, 45°, 60°, 120°) are also possible. However, it is crucial that the pin-like connections engage at different sections of the circumference of the shaped body in order to ensure an efficient fixation of the shaped body5in the manufacturing process. The thickness of the pin-like connections10is chosen so that they are easy to break, whereby the shaped body5can be released out of the frame6in a simple manner. FIG.3shows a plurality of frames6, which are combined in the manner of an array or in a matrix arrangement in a plane A (Z-Y) and respectively hold a shaped body5of individual shape. This frame arrangement12has a honeycomb structure in the plane A. Adjacent frames6share a common frame wall section13within this honeycomb structure of the plane A. In the X-direction behind the first frame arrangement plane A, a further frame arrangement plane B is constructed, which, like the frame arrangement plane A, consists of a plurality of matrix arrangement frames in honeycomb structure with shaped bodies arranged therein. The planes A and B are spaced apart from each other, which means that a shaped body of the plane A is not directly connected to the shaped body of the plane B located behind. In fact, the two planes A and B are connected to each other only via webs (not shown) integrally formed with the frame arrangement planes A and B, that is, frames lying one behind the other in the X direction are not directly connected to each other. These webs are formed so as to be easily broken, whereby the planes A/B can be easily separated from each other. Plane B is followed by further planes C/D, which are analogous to planes A/B.
LIST OF REFERENCE NUMBERS
1 tooth crown
2 support structure
3 region having special tendency to distortion
4 region having special tendency to distortion
5 shaped body
6 frame
7 axial direction
8 longitudinal axis
9 sidewall of the frame
10 connections
11 support structure
12 frame arrangement
13 common frame wall section
A first frame arrangement plane
B second frame arrangement plane
C third frame arrangement plane
D fourth frame arrangement plane | 5,723
11858170 | DETAILED DESCRIPTION OF THE INVENTION Hereinafter, embodiments for carrying out the present invention will be described with reference to the drawings. The present invention is not limited to each embodiment, and components can be modified and embodied without departing from the spirit of the present invention. Further, various inventions can be formed by appropriately combining a plurality of components disclosed in each embodiment. For example, some components may be removed from all of the components shown in the embodiments. Furthermore, the components of different embodiments may be optionally combined. FIG.1is a perspective view showing a honeycomb structure1produced by a method for producing the honeycomb structure1according to an embodiment of the present invention. The honeycomb structure1shown inFIG.1is a pillar shaped member made of ceramics, and includes: an outer peripheral wall10; and a partition wall11which is arranged on an inner side of the peripheral wall10and defines a plurality of cells11aeach extending from one end face to the other end face to form a flow path. The pillar shape is understood as a three-dimensional shape having a thickness in an extending direction of the cells11a(axial direction of the honeycomb structure1). A ratio of an axial length of the honeycomb structure1to a diameter or width of the end face of the honeycomb structure1(aspect ratio) is arbitrary. The pillar shape may also include a shape in which the axial length of the honeycomb structure1is shorter than the diameter or width of the end face (flat shape). An outer shape of the honeycomb structure1is not particularly limited as long as it has a pillar shape. For example, it can be other shapes such as a pillar shape having circular end faces (cylindrical shape), a pillar shape having oval end faces, and a pillar shape having polygonal (rectangular, pentagonal, hexagonal, heptagonal, octagonal, etc.) end faces. As for the size of the honeycomb structure1, an area of the end faces is preferably from 2,000 to 20,000 mm2, and even more preferably from 5,000 to 15,000 mm2, in order to increase heat resistance (to suppress cracks generated in the circumferential direction of the outer peripheral wall). A shape of each cell in the cross section perpendicular to the extending direction of the cells11amay preferably be a quadrangle, hexagon, octagon, or a combination thereof. Among these, the quadrangle and the hexagon are preferred. Such a cell shape can lead to a decreased pressure loss when an exhaust gas flows through the honeycomb structure1, which can provide improved purification performance. The quadrangle is particularly preferred from the viewpoint that it is easy to achieve both structural strength and heating uniformity. Each of the partition walls11that define the cells11apreferably has a thickness of from 0.1 to 0.3 mm, and more preferably from 0.15 to 0.25 mm. The thickness of 0.1 mm or more of each partition wall11can suppress a decrease in the strength of the honeycomb structure1. The thickness of each partition wall11of 0.3 mm or less can suppress an increase in pressure loss when an exhaust gas flows through the honeycomb structure1if the honeycomb structure1is used as a catalyst support to support a catalyst.
In the present invention, the thickness of each partition wall11is defined as a length of a portion passing through the partition wall11, among line segments connecting the centers of gravity of adjacent cells11a,in the cross section perpendicular to the extending direction of the cells11a. The honeycomb structure1preferably has a cell density of from 40 to 150 cells/cm2, and more preferably from 70 to 100 cells/cm2, in the cross section perpendicular to the flow path direction of the cells11a.The cell density in such a range can allow the purification performance of the catalyst to be increased while reducing the pressure loss when the exhaust gas flows. The cell density of 40 cells/cm2or more can allow a catalyst supported area to be sufficiently ensured. The cell density of 150 cells/cm2or less can prevent the pressure loss when the exhaust gas flows through the honeycomb structure1from being increased if the honeycomb structure1is used as a catalyst support to support the catalyst. The cell density is a value obtained by dividing the number of cells by the area of one end face portion of the honeycomb structure1excluding the outer peripheral wall10portion. The provision of the outer peripheral wall10of the honeycomb structure1is useful from the viewpoints of ensuring the structural strength of the honeycomb structure1and suppressing the leakage of a fluid flowing through the cells11afrom the outer perimeter wall10. Specifically, the thickness of the outer peripheral wall10is preferably 0.05 mm or more, and more preferably 0.10 mm or more, and even more preferably 0.15 mm or more. However, if the outer peripheral wall10is too thick, the strength will be too high, and a strength balance between the outer peripheral wall10and the partition wall11will be lost, resulting in a decrease in thermal shock resistance. Therefore, the thickness of the outer peripheral wall10is preferably 1.0 mm or less, and more preferably 0.7 mm or less, and even more preferably 0.5 mm or less. The thickness of the outer peripheral wall10is defined as a thickness of the outer peripheral wall in the normal line direction relative to the tangent line at a measured point when the point of the outer peripheral wall10where the thickness is to be measured is observed in the cross section perpendicular to the extending direction of the cells. The honeycomb structure1is made of ceramics and is preferably electrically conductive. Electric resistivity is not particularly limited as long as the honeycomb structure1is capable of heat generation by Joule heat when a current is applied. Preferably, the electric resistivity is from 0.1 to 200 Ωcm, and more preferably from 1 to 200 Ωcm. As used herein, the electric resistivity of the honeycomb structure1refers to a value measured at 25° C. by the four-terminal method. The honeycomb structure1can be made of a material selected from the group consisting of oxide ceramics such as alumina, mullite, zirconia and cordierite, and non-oxide ceramics such as silicon carbide, silicon nitride and aluminum nitride, although not limited thereto. Further, silicon carbide-metal-silicon composite materials and silicon carbide/graphite composite materials can also be used. Among these, it is preferable that the material of the honeycomb structure1contains ceramics mainly based on a silicon-silicon carbide composite material or silicon carbide, in terms of balancing heat resistance and electrical conductivity. 
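As a concrete illustration of the cell density convention defined above (the number of cells divided by the end-face area excluding the outer peripheral wall10portion), the following minimal Python sketch computes the density for a hypothetical cylindrical honeycomb. The dimensions and cell count below are illustrative assumptions, not values taken from the embodiments.

import math

def cell_density(num_cells, end_face_area_mm2, outer_wall_area_mm2):
    """Cells per square centimeter over the end face, excluding the outer peripheral wall portion."""
    open_area_cm2 = (end_face_area_mm2 - outer_wall_area_mm2) / 100.0  # 100 mm2 = 1 cm2
    return num_cells / open_area_cm2

# Hypothetical cylindrical honeycomb: 103 mm end-face diameter, 0.5 mm thick outer peripheral wall.
diameter_mm = 103.0
outer_wall_mm = 0.5
end_face_mm2 = math.pi * (diameter_mm / 2.0) ** 2                      # about 8,300 mm2, within the 5,000-15,000 mm2 preferred range
wall_ring_mm2 = end_face_mm2 - math.pi * (diameter_mm / 2.0 - outer_wall_mm) ** 2

print(round(cell_density(7500, end_face_mm2, wall_ring_mm2)))          # about 92 cells/cm2, within the 70-100 cells/cm2 preferred range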
The phrase "the material of the honeycomb structure1is mainly based on silicon-silicon carbide composite material" means that the honeycomb structure1contains 90% by mass or more of silicon-silicon carbide composite material (total mass) based on the total material. Here, the silicon-silicon carbide composite material contains silicon carbide particles as an aggregate and silicon as a binding material to bind the silicon carbide particles, preferably in which a plurality of silicon carbide particles are bound by silicon such that pores are formed between the silicon carbide particles. The phrase "the material of the honeycomb structure1is mainly based on silicon carbide" means that the honeycomb structure1contains 90% or more of silicon carbide (total mass) based on the total material. When the honeycomb structure1contains the silicon-silicon carbide composite material, a ratio of the "mass of silicon as a binding material" contained in the honeycomb structure1to the total of the "mass of silicon carbide particles as an aggregate" contained in the honeycomb structure1and the "mass of silicon as a binding material" contained in the honeycomb structure1is preferably from 10 to 40% by mass, and more preferably from 15 to 35% by mass. The partition wall11may be porous. When the partition wall11is porous, the porosity of the partition wall11is preferably from 35 to 60%, and even more preferably from 35 to 45%. The porosity is a value measured by a mercury porosimeter. The partition wall11of the honeycomb structure1preferably has an average pore diameter of from 2 to 15 μm, and even more preferably from 4 to 8 μm. The average pore diameter is a value measured by a mercury porosimeter. The honeycomb structure1has a slit12that divides the honeycomb structure1in a cross section orthogonal to an axial direction of the honeycomb structure1. The slit12extends in a straight line from one end to the other end of the honeycomb structure1in the radial or width direction of the honeycomb structure1. The slit12also extends in a straight line from one end face to the other end face of the honeycomb structure1in the axial direction of the honeycomb structure1. The slit12is filled with a joining material13. The joining material13is filled in at least a part of a space of the slit12. The joining material13is preferably filled in 50% or more of the space of the slit12, and the joining material13is more preferably filled in the entire space of the slit12. In the embodiment as shown inFIG.1, the joining material13is filled in the entire space of the slit12to form a plane integrated with both end faces of the honeycomb structure1and a curved surface integrated with the outer peripheral wall10of the honeycomb structure1. However, the joining material13may be filled only to a position on an axially inner side of the end faces of the honeycomb structure1, or may be filled only to a position on an inner side, in the radial or width direction, of the outer peripheral wall10of the honeycomb structure1. When the main component of the honeycomb structure1is silicon carbide or the metal silicon-silicon carbide composite material, the joining material13preferably contains at least 20% by mass of silicon carbide, and more preferably from 20 to 70% by mass of silicon carbide. This can allow a thermal expansion coefficient of the joining material13to be close to that of the honeycomb structure1, thereby improving the thermal shock resistance of the honeycomb structure1.
The joining material13may contain 30% by mass or more of silica, alumina, or the like. Although not shown, a pair of electrode layers each extending in the form of band in the flow path direction of the cells11amay be provided on the outer surface of the outer peripheral wall10of the honeycomb structure1, and electrode terminals may be provided on these electrode layers. A voltage can be applied to the honeycomb structure1through those electrode terminals and electrode layers to generate heat in the honeycomb structure1. The electric resistivity of the electrode layers is preferably 1/200 or more and 1/10 or less of that of the honeycomb structure1, in terms of facilitating the flow of electricity to the electrode layers. Each electrode layer may be made of conductive ceramics, a metal, or a composite material (cermet) of a metal and a conductive ceramic. Examples of the metal include a single metal of Cr, Fe, Co, Ni, Si or Ti, or an alloy containing at least one metal selected from the group consisting of those metals. Non-limiting examples of the conductive ceramics include silicon carbide (SiC), and metal compounds such as metal silicides such as tantalum silicide (TaSi2) and chromium silicide (CrSi2). As a method for producing the honeycomb structure1having the electrode layers, first, an electrode layer forming raw material containing ceramic materials is applied onto a side surface of a honeycomb dried body and dried to form a pair of unfired electrode layers on the outer surface of the outer peripheral wall so as to extend in the form of band in the flow path direction of the cells, across the central axis of the honeycomb dried body, thereby providing a honeycomb dried body with unfired electrode layers. Then, the honeycomb dried body with unfired electrode layers is fired to produce a honeycomb fired body having a pair of electrode layers. The honeycomb structure1having the electrode layers is thus obtained. Next,FIG.2is an explanatory view showing a method for producing the honeycomb structure1. The honeycomb structure1inFIG.1can be produced through a first step shown in (a), a second step shown in (b), a third step shown in (c), and a fourth step shown in (d) ofFIG.2. As shown inFIG.2(a), in the first step, the honeycomb structure1being free from slit is prepared, and the slit12is formed leaving at least a part of the outer peripheral wall10or the partition wall11. The slit12can be formed, for example, by cutting the outer peripheral wall10or the partition wall11. In this case, it is preferable to leave at least a part of the peripheral wall10or the partition wall11such that both sides of the honeycomb structure1across the slit12are connected by the peripheral wall10or the partition wall11. In other words, it is preferable that at least a part of the outer peripheral wall10or the partition wall11is left on both sides of the slit12in the radial or width direction or the axial direction. This is to prevent the honeycomb structure1from collapsing after the slit12is formed in the first step. As shown inFIG.2(a), it is more preferable to leave at least a part of the outer peripheral wall10. More particularly, it is more preferable to leave all of the outer peripheral wall10at a position where the slit12is formed (leave the outer peripheral wall10over the entire region in the axial direction at the positions on both sides of the honeycomb structure1in the radial or width direction). 
This is to prevent the honeycomb structure1from collapsing after the first step by leaving the entire outer peripheral wall10which has relatively high strength. Also, as shown inFIG.2(a), it is preferable to remove all of the partition wall11at the position where the slit12is formed. This is to reduce the workload of the subsequent third step. However, in contrast to the embodiment shown inFIG.2(a), the partition wall11may be left in place of the outer peripheral wall10. When leaving a part of the partition wall11, all of the peripheral wall10at the position where the slit12is formed may be removed, or at least a part of the peripheral wall10at the same position may additionally be left. The position where a part of the outer peripheral wall10or the partition wall11is left may be only a part in the radial or width direction, or only a part in the axial direction. The second step shown in (b) ofFIG.2is carried out after the first step. In the second step, the slit12formed in the first step is filled with the joining material13. The joining material13can be filled in the slit12by press-fitting using a jig such as a syringe, for example. The third step shown in (c) ofFIG.2is carried out after the second step. In the third step, at least a part of the outer peripheral wall10or the partition wall11left in the first step is removed to obtain the honeycomb structure1having the slit12that divides the honeycomb structure1.FIG.2(c)shows a mode where the outer peripheral wall10left in the first step is removed to form a groove14extending in the axial direction of the honeycomb structure1. The portions of the honeycomb structure1that are divided by the slit12are joined by the outer peripheral wall10, a part of the partition wall11, or the joining material13throughout the first to third steps. This can suppress the joining deviation as compared to the case where at least a part of the outer peripheral wall10or the partition wall11is not left in the first step. The third step is preferably carried out after the joining material13filled in the second step has been dried and solidified. The joining material13may be dried by leaving the honeycomb structure1as it is for a predetermined time after filling the slit12with the joining material13, or by using a drying furnace or other equipment, for example. The fourth step shown in (d) ofFIG.2is carried out after the third step. In the fourth step, the portion where at least a part of the outer peripheral wall10or the partition wall11has been removed in the third step is filled with the joining material13.FIG.2(d)shows a mode where the joining material13is filled in the groove14extending in the axial direction of the honeycomb structure1so as to form a curved surface integrated with the outer peripheral wall10of the honeycomb structure1. The fourth step is not essential, and the production of the honeycomb structure1may be completed in the third step. In the method for producing the honeycomb structure1according to the present embodiment, the joining deviation can be suppressed, because at least a part of the outer peripheral wall10or the partition wall11is left to form the slit12, and the slit12is filled with the joining material13, and at least a part of the outer peripheral wall10or the partition wall11that was left is then removed. This can allow deterioration of the shape of the honeycomb structure1to be suppressed, thereby avoiding a decrease in the canning property or a decrease in the strength of the honeycomb structure1.
In the first step, the joining deviation can be more reliably suppressed because at least a part of the outer peripheral wall10is left. After the third step, the portion where at least a part of the outer peripheral wall10or the partition wall11has been removed is filled with the joining material13, so that the entire slit12can be filled with the joining material13, thereby improving the strength of the honeycomb structure1. DESCRIPTION OF REFERENCE NUMERALS 1: honeycomb structure10: outer peripheral wall11: partition wall11a:cell12: slit13: joining material | 17,768 |
11858171 | DETAILED DESCRIPTION Before turning to the figures, which illustrate the exemplary embodiments in detail, it should be understood that the present application is not limited to the details or methodology set forth in the description or illustrated in the figures. It should also be understood that the terminology is for the purpose of description only and should not be regarded as limiting. According to an exemplary embodiment, a concrete mixing vehicle includes a drum assembly having a mixing drum, a drum drive system, and a drum control system. The drum control system may be configured to control the drum drive system to rotate the mixing drum at a target speed. According to an exemplary embodiment, the drum drive system is a hydraulic drum drive system having two degrees of freedom. By having a drum drive system with a second degree of freedom, the drum drive system facilitates optimizing, balancing, and synchronizing the speed, the torque, and the load of critical components of the drum drive system. The drum drive system of the present disclosure may advantageously minimize energy consumption or waste, reduce noise and emissions, and optimize component working life relative to a single degree of freedom drum drive system. The two degree of freedom drum drive system therefore provides a system that delivers better fuel consumption, optimal system life, and friendlier working environment. While the drum drive system is described herein as a drum drive system for a concrete mixer truck, the drum drive system may be applied to any vehicles having similar accessory drive configurations. A drum drive system typically includes a hydrostatic drive that functions as both the power source and the speed control device for drum drives. Hydrostatic drives may offer fast response, can maintain precise speed under varying loads, and allow continuously variable speed ratio control. A basic hydrostatic drive is a complete closed loop hydraulic circuit containing a pump and a motor. The pump of the hydrostatic drive is typically a reversible variable-displacement pump. The pump may be coupled to and driven by a power-take-off (“PTO”) shaft coupled to an engine of the vehicle. The motor is conventionally a fixed displacement motor. The motor may be coupled to the drum through a ratio reduction gearbox, pulley system, or otherwise coupled thereto. The pump may include a built-in device to adjust the pump displacement and flow direction. The drum assembly may be operable in multiple working modes. The drum may be operated through a wide speed range, from lower than 1 revolution-per-minute (“rpm”) in a transportation mode (e.g., while the vehicle is moving, etc.) to above 18 rpm in a loading mode and/or a mixing mode. While in a discharging mode, it may be desirable to have the lowest possible drum speed to achieve accurate discharging. The mixing mode of the drum may require the hydrostatic drive to provide a speed range over 20:1 (e.g., the highest speed of the drum divided by the lowest speed of the drum, etc.). The max speed range of a standard hydrostatic pump is about 10:1 due to maximum pump displacement, pressure limit, and/or torque limit thereof. A fixed displacement motor has a fixed speed and therefore the speed range thereof is fixed (e.g., 1:1, etc.) based on the pump output provided thereto. Therefore, the engine has to run over its full speed range (approximately 3:1) to meet application requirements for the mixing mode. 
In the loading mode and/or the mixing mode, the engine will typically run at high idle (up to maximum governed speed). In the discharging mode, the engine may run near low idle or independently of drum operation if the vehicle is being driven. The limited speed ratio range of a typical hydrostatic drive presents severe drawbacks in concrete mixing. Mixer vehicles have engines that are sized mainly for acceleration and climbing the most severe uphill grades at maximum load. In concrete mixing operations, the required power is typically about one third of the engine capacity. Running at high idle results in poor fuel efficiency. In addition to unnecessary fuel consumption, more emissions, more noise, and reduced engine life are all byproducts. Another issue is the accuracy of concrete discharging. Some applications prefer a slow and accurate discharging rate. The engine may thereby be run at low idle to provide a slow discharge rate of mixture from the drum. However, the engine torque capacity becomes very weak at low idle and any load change causes engine speed fluctuations, which negatively affects the discharging accuracy. According to an exemplary embodiment, the drum drive system of the present disclosure replaces the conventional fixed displacement motor with a variable displacement motor. The variable displacement motor may provide a speed range of 3:1 or 4:1. The speed range of the drum drive system is a product of the pump speed range multiplied by the motor speed range. With a fixed displacement motor, the speed range of the drum drive system is the speed ratio of the variable pump, typically around 10:1. The drum drive system with the variable displacement motor may have a speed range that reaches up to 30:1 or 40:1. The increased speed range of the drum drive system having a variable displacement motor relative to a drum drive system having a fixed displacement motor frees up boundary limits for the engine, the pump, and the motor. Advantageously, with the increased capacity of the drum drive system, the engine does not have to run at either high idle or low idle, but rather may operate at a speed that provides the most fuel efficiency and most stable torque. Also, the pump and the motor do not have to go to displacement extremes to meet the speed requirements of various applications, but can rather be modulated to the most efficient working conditions. The drum drive system of the present disclosure may provide a desirable maximum overall drive ratio relative to traditional arrangements. The maximum overall drive ratio may be the ratio of the engine speed to the drum speed and may vary based on the maximum pump displacement, the minimum motor displacement, and/or the gearbox ratio. The maximum overall drive ratio may be limited in conventional systems to prevent drum over speed at elevated (e.g., the highest possible, etc.) engine speed. In conventional systems, the maximum overall drive ratio may be 120:1 (e.g., an engine speed of 2,100 rpm may provide a drum speed of 18 rpm at full pump displacement, etc.). The motor of the present disclosure may have a 3:1-4:1 reduction (e.g., at 100% to 33-25% displacement, etc.) and facilitate providing an overall drive ratio of 30:1-40:1. The motor of the present disclosure may facilitate providing maximum drum speed at or near engine idle speed. In one embodiment, traditional engine idle speed variation is displaced by motor displacement variation.
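The ratio arithmetic above can be checked with a short sketch. This is merely an illustration of the quoted figures (a roughly 10:1 pump range, a 3:1-4:1 motor range, and the 2,100 rpm/18 rpm conventional example), not code from the disclosure; the helper names are hypothetical.

def system_speed_range(pump_range, motor_range):
    """Overall speed range of the hydrostatic drive (pump range times motor range)."""
    return pump_range * motor_range

def overall_drive_ratio(engine_rpm, drum_rpm):
    """Ratio of engine speed to drum speed."""
    return engine_rpm / drum_rpm

print(system_speed_range(10, 3), system_speed_range(10, 4))   # 30 and 40, i.e. 30:1 to 40:1
print(round(overall_drive_ratio(2100, 18)))                   # about 117, roughly the 120:1 conventional figure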
Since the early application of hydrostatic drives, manual pump adjustment has been the main method of drum speed control. Adding the control and adjustment of the variable displacement motor not only doubles the operator demands, it also introduces risks of over speeding the motor, over speeding the drum, and over pressurizing the system. It is beyond a capacity of a human operator and a traditional mixer vehicle control system to control pump displacement, motor displacement, and engine speed at the same time and guarantee the pressures and speeds are all within target operating ranges. The drum control system of the present disclosure is more sophisticated relative to those of traditional mixer vehicles (e.g., those having fixed-displacement motors, etc.). The drum control system may be configured to control pump displacement and motor displacement while continuously electronically controlling the engine speed (e.g., when the concrete mixing vehicle is not being driven, etc.). To facilitate such control, the drum control system is configured to monitor the working pressure on both sides of the motor and the pump, motor speed (i.e., which is proportional to the drum speed), engine speed, engine torque, and/or percent load. By way of example, when the operator specifies a desired drum speed, the drum control system may be configured to regulate the engine speed, pump displacement, and motor displacement. The drum control system may be configured to maintain the engine speed at the lowest required level while controlling pump displacement and motor displacement to provide the required power demand to operate the drum at the desired drum speed. In other embodiments, engine speed is not varied. In still other embodiments, the drum control system reduces the risk of system over pressure and/or drum over speed, improving fuel economy by using lower engine speeds, in response to independently controlled pump displacement and/or flow and independently controlled engine speed. Such independent pump control may be facilitated by way of a manual cable control, a manual analog control, or a manual electronic control. Such independent drum speed control may be facilitated by way of a PTO speed controller. According to the exemplary embodiment shown inFIGS.1-4, a vehicle, shown as concrete mixing truck10, includes a drum assembly, shown as drum assembly100. According to an exemplary embodiment, the concrete mixing truck10is configured as a rear-discharge concrete mixing truck. In other embodiments, the concrete mixing truck10is configured as a front-discharge concrete mixing truck. As shown inFIG.1, the concrete mixing truck10includes a chassis, shown as frame12, and a cab, shown as cab14, coupled to the frame12(e.g., at a front end thereof, etc.). The drum assembly100is coupled to the frame12and disposed behind the cab14(e.g., at a rear end thereof, etc.), according to the exemplary embodiment shown inFIG.1. In other embodiments, at least a portion of the drum assembly100extends in front of the cab14. The cab14may include various components to facilitate operation of the concrete mixing truck10by an operator (e.g., a seat, a steering wheel, hydraulic controls, a user interface, switches, buttons, dials, etc.). As shown inFIGS.1,3, and4, the concrete mixing truck10includes a prime mover, shown as engine16. As shown inFIG.1, the engine16is coupled to the frame12at a position beneath the cab14. 
The engine16may be configured to utilize one or more of a variety of fuels (e.g., gasoline, diesel, bio-diesel, ethanol, natural gas, etc.), according to various exemplary embodiments. According to an alternative embodiment, the engine16additionally or alternatively includes one or more electric motors coupled to the frame12(e.g., a hybrid vehicle, an electric vehicle, etc.). The electric motors may consume electrical power from an on-board storage device (e.g., batteries, ultra-capacitors, etc.), from an on-board generator (e.g., an internal combustion engine, etc.), and/or from an external power source (e.g., overhead power lines, etc.) and provide power to systems of the concrete mixing truck10. As shown inFIGS.1and4, the concrete mixing truck10includes a power transfer device, shown as transmission18. In one embodiment, the engine16produces mechanical power (e.g., due to a combustion reaction, etc.) that flows into the transmission18. As shown inFIGS.1and4, the concrete mixing truck10includes a first drive system, shown as vehicle drive system20, that is coupled to the transmission18. The vehicle drive system20may include drive shafts, differentials, and other components coupling the transmission18with a ground surface to move the concrete mixing truck10. As shown inFIG.1, the concrete mixing truck10includes a plurality of tractive elements, shown as wheels22, that engage a ground surface to move the concrete mixing truck10. In one embodiment, at least a portion of the mechanical power produced by the engine16flows through the transmission18and into the vehicle drive system20to power at least a portion of the wheels22(e.g., front wheels, rear wheels, etc.). In one embodiment, energy (e.g., mechanical energy, etc.) flows along a first power path defined from the engine16, through the transmission18, and to the vehicle drive system20. As shown inFIGS.1-3, the drum assembly100of the concrete mixing truck10includes a drum, shown as mixing drum102. The mixing drum102is coupled to the frame12and disposed behind the cab14(e.g., at a rear and/or middle of the frame12, etc.). As shown inFIGS.1-4, the drum assembly100includes a second drive system, shown as drum drive system120, that is coupled to the frame12. As shown inFIGS.1and2, the concrete mixing truck10includes a first support, shown as front pedestal106, and a second support, shown as rear pedestal108. According to an exemplary embodiment, the front pedestal106and the rear pedestal108cooperatively couple (e.g., attach, secure, etc.) the mixing drum102to the frame12and facilitate rotation of the mixing drum102relative to the frame12. In an alternative embodiment, the drum assembly100is configured as a stand-alone mixing drum that is not coupled (e.g., fixed, attached, etc.) to a vehicle. In such an embodiment, the drum assembly100may be mounted to a stand-alone frame. The stand-alone frame may be a chassis including wheels that assist with the positioning of the stand-alone mixing drum on a worksite. Such a stand-alone mixing drum may also be detachably coupled to and/or capable of being loaded onto a vehicle such that the stand-alone mixing drum may be transported by the vehicle. As shown inFIGS.1and2, the mixing drum102defines a central, longitudinal axis, shown as axis104. According to an exemplary embodiment, the drum drive system120is configured to selectively rotate the mixing drum102about the axis104. As shown inFIGS.1and2, the axis104is angled relative to the frame12such that the axis104intersects with the frame12. 
According to an exemplary embodiment, the axis104is elevated from the frame12at an angle in the range of five degrees to twenty degrees. In other embodiments, the axis104is elevated by less than five degrees (e.g., four degrees, three degrees, etc.) or greater than twenty degrees (e.g., twenty-five degrees, thirty degrees, etc.). In an alternative embodiment, the concrete mixing truck10includes an actuator positioned to facilitate selectively adjusting the axis104to a desired or target angle (e.g., manually in response to an operator input/command, automatically according to a control scheme, etc.). As shown inFIGS.1and2, the mixing drum102of the drum assembly100includes an inlet, shown as hopper110, and an outlet, shown as chute112. According to an exemplary embodiment, the mixing drum102is configured to receive a mixture, such as a concrete mixture (e.g., cementitious material, aggregate, sand, etc.), with the hopper110. As shown inFIGS.1and2, the mixing drum102includes a port, shown as injection port130. The injection port130may provide access into the interior of the mixing drum102to inject water and/or chemicals (e.g., air entrainers, water reducers, set retarders, set accelerators, superplasticizers, corrosion inhibitors, coloring, calcium chloride, minerals, and/or other concrete additives, etc.). According to an exemplary embodiment, the injection port130includes an injection valve that facilitates injecting the water and/or the chemicals from a fluid reservoir (e.g., a water tank, etc.) into the mixing drum102to interact with the mixture, while preventing the mixture within the mixing drum102from exiting the mixing drum102through the injection port130. In some embodiments, the mixing drum102includes multiple injection ports130(e.g., two injection ports, three injection ports, etc.) configured to facilitate independently injecting different water and/or chemicals into the mixing drum102. The mixing drum102may include a mixing element (e.g., fins, etc.) positioned within the interior thereof. The mixing element may be configured to (i) agitate the contents of mixture within the mixing drum102when the mixing drum102is rotated by the drum drive system120in a first direction (e.g., counterclockwise, clockwise, etc.) and (ii) drive the mixture within the mixing drum102out through the chute112when the mixing drum102is rotated by the drum drive system120in an opposing second direction (e.g., clockwise, counterclockwise, etc.). As shown inFIGS.2-4, the drum drive system120includes a pump, shown as pump122; a reservoir, shown as fluid reservoir124, fluidly coupled to the pump122; and an actuator, shown as drum motor126. As shown inFIGS.3and4, the pump122and the drum motor126are fluidly coupled. According to an exemplary embodiment, the drum motor126is a hydraulic motor, the fluid reservoir124is a hydraulic fluid reservoir, and the pump122is a hydraulic pump. The pump122may be configured to pump fluid (e.g., hydraulic fluid, etc.) stored within the fluid reservoir124to drive the drum motor126. According to an exemplary embodiment, the pump122is a variable displacement hydraulic pump (e.g., an axial piston pump, etc.) and has a pump stroke that is variable. The pump122may be configured to provide hydraulic fluid at a flow rate that varies based on the pump stroke (e.g., the greater the pump stroke, the greater the flow rate provided to the drum motor126, etc.). 
The pressure of the hydraulic fluid provided by the pump122may also increase in response to an increase in pump stroke (e.g., where pressure may be directly related to work load, higher flow may result in higher pressure, etc.). The pressure of the hydraulic fluid provided by the pump122may alternatively not increase in response to an increase in pump stroke (e.g., in instances where there is little or no work load, etc.). The pump122may include a throttling element (e.g., a swash plate, etc.). The pump stroke of the pump122may vary based on the orientation of the throttling element. In one embodiment, the pump stroke of the pump122varies based on an angle of the throttling element (e.g., relative to an axis along which the pistons move within the axial piston pump, etc.). By way of example, the pump stroke may be zero where the angle of the throttling element is equal to zero. The pump stroke may increase as the angle of the throttling element increases. According to an exemplary embodiment, the variable pump stroke of the pump122provides a variable speed range of up to about 10:1. In other embodiments, the pump122is configured to provide a different speed range (e.g., greater than 10:1, less than 10:1, etc.). In one embodiment, the throttling element of the pump122is movable between a stroked position (e.g., a maximum stroke position, a partially stroked position, etc.) and a destroked position (e.g., a minimum stroke position, a partially destroked position, etc.). According to an exemplary embodiment, an actuator is coupled to the throttling element of the pump122. The actuator may be positioned to move the throttling element between the stroked position and the destroked position. In some embodiments, the pump122is configured to provide no flow, with the throttling element in a non-stroked position, in a default condition (e.g., in response to not receiving a stroke command, etc.). The throttling element may be biased into the non-stroked position. In some embodiments, the drum control system150is configured to provide a first command signal. In response to receiving the first command signal, the pump122(e.g., the throttling element by the actuator thereof, etc.) may be selectively reconfigured into a first stroke position (e.g., stroke in one direction, a destroked position, etc.). In some embodiments, the drum control system150is configured to additionally or alternatively provide a second command signal. In response to receiving the second command signal, the pump122(e.g., the throttling element by the actuator thereof, etc.) may be selectively reconfigured into a second stroke position (e.g., stroke in an opposing second direction, a stroked position, etc.). The pump stroke may be related to the position of the throttling element and/or the actuator. According to another exemplary embodiment, a valve is positioned to facilitate movement of the throttling element between the stroked position and the destroked position. In one embodiment, the valve includes a resilient member (e.g., a spring, etc.) configured to bias the throttling element in the destroked position (e.g., by biasing movable elements of the valve into positions where a hydraulic circuit actuates the throttling element into the destroked position, etc.).
Pressure from fluid flowing through the pump122may overcome the resilient member to actuate the throttling element into the stroked position (e.g., by actuating movable elements of the valve into positions where a hydraulic circuit actuates the throttling element into the stroked position, etc.). As shown inFIG.4, the concrete mixing truck10includes a power takeoff unit, shown as power takeoff unit32, that is coupled to the transmission18. In another embodiment, the power takeoff unit32is coupled directly to the engine16. In one embodiment, the transmission18and the power takeoff unit32include mating gears that are in meshing engagement. A portion of the energy provided to the transmission18flows through the mating gears and into the power takeoff unit32, according to an exemplary embodiment. In one embodiment, the mating gears have the same effective diameter. In other embodiments, at least one of the mating gears has a larger diameter, thereby providing a gear reduction or a torque multiplication and increasing or decreasing the gear speed. As shown inFIG.4, the power takeoff unit32is selectively coupled to the pump122with a clutch34. In other embodiments, the power takeoff unit32is directly coupled to the pump122(e.g., without clutch34, etc.). In some embodiments, the concrete mixing truck10does not include the clutch34. By way of example, the power takeoff unit32may be directly coupled to the pump122(e.g., a direct configuration, a non-clutched configuration, etc.). According to an alternative embodiment, the power takeoff unit32includes the clutch34(e.g., a hot shift PTO, etc.). In one embodiment, the clutch34includes a plurality of clutch discs. When the clutch34is engaged, an actuator forces the plurality of clutch discs into contact with one another, which couples an output of the transmission18with the pump122. In one embodiment, the actuator includes a solenoid that is electronically actuated according to a clutch control strategy. When the clutch34is disengaged, the pump122is not coupled to (i.e., is isolated from) the output of the transmission18. Relative movement between the clutch discs or movement between the clutch discs and another component of the power takeoff unit32may be used to decouple the pump122from the transmission18. In one embodiment, energy flows along a second power path defined from the engine16, through the transmission18and the power takeoff unit32, and into the pump122when the clutch34is engaged. When the clutch34is disengaged, energy flows from the engine16, through the transmission18, and into the power takeoff unit32. The clutch34selectively couples the pump122to the engine16, according to an exemplary embodiment. In one embodiment, energy along the first flow path is used to drive the wheels22of the concrete mixing truck10, and energy along the second flow path is used to operate the drum drive system120(e.g., power the pump122, etc.). By way of example, the clutch34may be engaged such that energy flows along the second flow path when the pump122is used to provide hydraulic fluid to the drum motor126. When the pump122is not used to drive the mixing drum102(e.g., when the mixing drum102is empty, etc.), the clutch34may be selectively disengaged, thereby conserving energy. In embodiments without clutch34, the mixing drum102may continue turning (e.g., at low speed) when empty. The drum motor126is positioned to drive the rotation of the mixing drum102. According to an exemplary embodiment, the drum motor126is a variable displacement motor. 
In one embodiment, the drum motor126operates within a variable speed range up to about 3:1 or 4:1. In other embodiments, the drum motor126is configured to provide a different speed range (e.g., greater than 4:1, less than 3:1, etc.). According to an exemplary embodiment, the speed range of the drum drive system120is the product of the speed range of the pump122and the speed range of the drum motor126. The drum drive system120having the pump122and the drum motor126may thereby have a speed range that reaches up to 30:1 or 40:1 (e.g., without having to operate the engine16at a high idle condition, etc.). According to an exemplary embodiment, the increased speed range of the drum drive system120having a variable displacement motor and a variable displacement pump relative to a drum drive system having a fixed displacement motor frees up boundary limits for the engine16, the pump122, and the drum motor126. Advantageously, with the increased capacity of the drum drive system120, the engine16does not have to run at either high idle or low idle during the various operating modes of the drum assembly100(e.g., mixing mode, discharging mode, filling mode, etc.), but rather the engine16may be operated at a speed that provides the most fuel efficiency and most stable torque. Also, the pump122and the drum motor126may not have to be operated at displacement extremes to meet the speed requirements for the mixing drum102during various applications, but can rather be modulated to the most efficient working conditions (e.g., by the drum control system150, etc.). As shown inFIG.2, the drum drive system120includes a drive mechanism, shown as drum drive wheel128, coupled to the mixing drum102. The drum drive wheel128may be welded, bolted, or otherwise secured to the head of the mixing drum102. The center of the drum drive wheel128may be positioned along the axis104such that the drum drive wheel128rotates about the axis104. According to an exemplary embodiment, the drum motor126is coupled to the drum drive wheel128(e.g., with a belt, a chain, a gearing arrangement, etc.) to facilitate driving the drum drive wheel128and thereby rotate the mixing drum102. The drum drive wheel128may be or include a sprocket, a cogged wheel, a grooved wheel, a smooth-sided wheel, a sheave, a pulley, or still another member. In other embodiments, the drum drive system120does not include the drum drive wheel128. By way of example, the drum drive system120may include a gearbox that couples the drum motor126to the mixing drum102. By way of another example, the drum motor126(e.g., an output thereof, etc.) may be directly coupled to the mixing drum102(e.g., along the axis104, etc.) to rotate the mixing drum102. According to an exemplary embodiment, the speed of the mixing drum102is directly proportional to the speed of the drum motor126(e.g., based on gearing, pulley, etc. arrangement between the drum motor126and the drum drive wheel128, etc.). The speed of the mixing drum102may be represented by the following expression: Nd∝Nm=Q/Dspm(1) where Ndis the speed of the mixing drum102, Nmis the speed of the drum motor126, Q is the hydraulic fluid flow provided to the drum motor126by the pump122, and Dspmis the displacement of the drum motor126. In a drum drive system where the drum actuator is a fixed displacement motor, the motor displacement is a constant and the speed of the drum motor126, and thereby the speed of the mixing drum102, is based solely on the hydraulic fluid flow provided by the pump122.
Advantageously, the drum drive system120of the present disclosure includes a variable displacement drum motor126such that the speed of the mixing drum102is based on the hydraulic fluid flow provided by the pump122and the displacement of the drum motor126. The hydraulic fluid flow provided by the pump122to the drum motor126may be represented by the following expression: Q=Np·Dspp(2) where Npis the speed of the pump122and Dsppis the displacement of the pump122. Since the pump122is driven by the engine16with the power takeoff unit32, the speed of the pump122is proportional to the speed of the engine16(e.g., approximately a 1:1 ratio, etc.), and thereby the hydraulic fluid flow is proportional to the speed of the engine16. A pump with higher displacement will provide more flow. However, increasing the displacement of a pump increases the size, weight, and cost thereof. Larger pumps also have a much lower allowable working speed because of the eccentric force from the increase in mass. Typically, the smallest pump that meets the work requirement is selected and the engine is operated at high idle when high drum speed is needed. However, this leads to various disadvantages such as unnecessary fuel consumption, more emissions, increased noise, reduced engine life, etc. The drum motor126having variable displacement alleviates the aforementioned disadvantages of a drum drive system having a fixed displacement motor. According to an exemplary embodiment, the drum motor126has a torque capacity that is capable of meeting the most severe work load experienced by the drum assembly100. The torque capacity of the drum motor126may be represented by the following expression: Tm=Dspm·PQ(3) where Tmis the torque of the drum motor126and PQis the pressure of the hydraulic fluid flow provided to the drum motor126by the pump122. A similar expression may be used to represent the torque capacity of the pump122. The pump122and the drum motor126may have a threshold working pressure (e.g., 5000 pounds-per-square-inch ("psi"), etc.). The energy required to operate the mixing drum102at a certain speed may be represented by the following expression: HP=Nm·Tm=PQ·Q(4) where HP is the horsepower of the drum drive system120. The most severe workloads appear when the mixing drum102is in acceleration, braking, and/or discharging (e.g., where the speed of the mixing drum102is in low to medium range, etc.). In a loading mode or a mixing mode, the speed of the mixing drum102is high but stable. The torque required for the loading and mixing modes is typically less than half of the most severe loads. During low speed and high torque conditions, the drum motor126may be configured to operate in a large displacement setting to provide the required torque. In a high speed but relatively stable torque condition, the drum motor126may be configured to operate at a reduced displacement so as to require less flow for the same rotating speed. Then, the speed of the pump122, and thereby the speed of the engine16, may be reduced. By way of example, during an initial stage of operation, the drum motor126may be operated at 100% displacement and the system pressure may be at 2000 psi. The pump122may also be operated at 100% displacement. The engine may be operated at a high idle speed of 2000 rpm. Now, if the displacement of the drum motor126is reduced to 50% of the maximum amount of displacement, only half of the original hydraulic flow is needed to maintain the same motor speed, based on Equation (1).
However, because the mixing drum102is still running with the same load at the same speed, the horsepower consumption will not change. From Equation (4), the system pressure will double with the same horsepower consumption and half the hydraulic fluid flow. Therefore, the system pressure will increase to 4000 psi from the original 2000 psi. Further, now that half of the original amount of hydraulic fluid flow is required, the pump122may be operated at half of the original speed thereof with the full displacement setting, based on Equation (2). As a result, the engine16may be operated at half of the high idle speed (e.g., 1000 rpm instead of 2000 rpm, etc.) since the speed of the pump122is proportional to the speed of the engine16. Therefore, the drum drive system120is capable of providing the same horsepower output while at significantly lower engine speeds, which provides much better fuel efficiency, less emissions, decreased operational noise, increased engine life, etc. By way of another example, concretes may not always be low slump heavy materials. With high slump light concrete, the drum work load can be much lighter. The system pressure may only be at 1500 psi with the drum motor126at full displacement. The motor displacement can be further decreased to less than 50%, for example 40%. The system pressure may only be 3750 psi (e.g., which is less than the maximum allowable system pressure, etc.). Then, the engine16may be operated at a low idle speed (e.g., 800 rpm, etc.). According to the exemplary embodiment shown inFIG.3, the drum control system150for the drum assembly100of the concrete mixing truck10includes a controller, shown as drum assembly controller152. In one embodiment, the drum assembly controller152is configured to selectively engage, selectively disengage, control, and/or otherwise communicate with components of the drum assembly100and/or the concrete mixing truck10(e.g., actively control the components thereof, etc.). As shown inFIG.3, the drum assembly controller152is coupled to the engine16, the pump122, the drum motor126, a first pressure sensor154, a second pressure sensor156, and a speed sensor158. The pump122is coupled to the engine16(e.g., by way of a PTO connection on a transmission of the concrete mixing truck10, etc.). In other embodiments, the drum assembly controller152is coupled to more or fewer components. The drum assembly controller152may be configured to regulate the speed of the engine16, the displacement of the pump122, and/or the displacement of the drum motor126to provide a target speed (e.g., received from an operator, etc.) of the mixing drum102. By way of example, the drum assembly controller152may send and receive signals with the engine16, the pump122, the drum motor126, the first pressure sensor154, the second pressure sensor156, and/or the speed sensor158. The drum assembly controller152may be implemented as hydraulic controls, a general-purpose processor, an application specific integrated circuit (ASIC), one or more field programmable gate arrays (FPGAs), a digital-signal-processor (DSP), circuits containing one or more processing components, circuitry for supporting a microprocessor, a group of processing components, or other suitable electronic processing components. According to an exemplary embodiment, the drum assembly controller152includes a processing circuit having a processor and a memory. 
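Equations (1)-(4) and the worked example above can be condensed into a short sketch. This is only an illustration, not the patent's implementation; quantities are treated as consistent relative units rather than real engineering units, and the function names are hypothetical.

def motor_speed(flow_q, motor_disp):
    return flow_q / motor_disp                      # Equation (1): Nm = Q / Dspm; drum speed is proportional to Nm

def pump_flow(pump_speed, pump_disp):
    return pump_speed * pump_disp                   # Equation (2): Q = Np * Dspp

def motor_torque(motor_disp, pressure):
    return motor_disp * pressure                    # Equation (3): Tm = Dspm * PQ

def power(pressure, flow_q):
    return pressure * flow_q                        # Equation (4): HP = Nm * Tm = PQ * Q

# Initial condition from the example: full motor displacement, 2000 psi, engine and pump at 2000 rpm.
engine_rpm, pump_disp, motor_disp, pressure = 2000.0, 1.0, 1.0, 2000.0
q0 = pump_flow(engine_rpm, pump_disp)
hp0 = power(pressure, q0)
nm0 = motor_speed(q0, motor_disp)

# Reduce motor displacement to 50%: half the flow keeps the same motor (and drum) speed.
motor_disp = 0.5
q1 = q0 / 2.0
assert motor_speed(q1, motor_disp) == nm0           # same drum speed per Equation (1)
new_pressure = hp0 / q1                             # same power with half the flow doubles the pressure (Equation (4))
new_engine_rpm = q1 / pump_disp                     # full pump displacement with half the flow halves pump/engine speed (Equation (2))
assert motor_torque(1.0, pressure) == motor_torque(motor_disp, new_pressure)  # same load torque either way (Equation (3))
print(new_pressure, new_engine_rpm)                 # 4000.0 (psi) and 1000.0 (rpm), matching the example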
The processing circuit may include an ASIC, one or more FPGAs, a DSP, circuits containing one or more processing components, circuitry for supporting a microprocessor, a group of processing components, or other suitable electronic processing components. In some embodiments, the processor is configured to execute computer code stored in the memory to facilitate the activities described herein. The memory may be any volatile or non-volatile computer-readable storage medium capable of storing data or computer code relating to the activities described herein. According to an exemplary embodiment, the memory includes computer code modules (e.g., executable code, object code, source code, script code, machine code, etc.) configured for execution by the processor. According to an exemplary embodiment, the drum assembly controller152is configured to regulate the engine speed, the pump displacement, and the motor displacement to provide a target drum speed, while maintaining the engine speed at a lowest possible level while using pump and motor displacement changes to achieve the target hydraulic fluid flow and hydraulic power demand. The control of the drum speed can be achieved by using a target drum speed error to calculate the pump and motor displacement changes with minor or no changes to the engine speed through a proportional-integral-derivative (“PID”) based control strategy. A look-up table based or gain-scheduling, or other forms of control strategies can also be used to adjust the pump and motor displacement independently. In some embodiments, the drum assembly controller152is configured to operate the engine16at the lowest possible engine speed, the pump122at the lowest possible pump displacement, and the drum motor126at the highest possible motor displacement to achieve the target drum speed within constraints such as maximum hydraulic pressure, maximum engine torque/load, and maximum drum speed. To facilitate such control, the drum control system is configured to monitor (i) the working pressure of the hydraulic fluid flow on both sides of the drum motor126and the pump122with the first pressure sensor154and the second pressure sensor156, (ii) the speed of the drum motor126with the speed sensor158(i.e., which is proportional to the speed of the mixing drum102), (iii) the speed of the engine16, (iv) the torque of the engine16, and/or (v) a percent load on the drum drive system120. Further details regarding a control strategy implemented by the drum assembly controller152are provided herein in relation to method500. In other embodiments, the drum control system150does not employ pressure feedback control (e.g., employs open loop control, controls based on other feedbacks, hydraulic components with higher pressure operating conditions are employed, etc.). In still other embodiments, the drum control system150is configured to adjust the displacement of the drum motor126in response to at least one of (i) a torque of the engine16, (ii) a load on the engine16, and (iii) a power of the engine16. In yet other embodiments, the drum control system is configured to adjust the displacement of the drum motor126in response to at least one of (i) a torque of the engine16and (ii) a torque of the pump122and a displacement of the pump122. 
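As one hedged illustration of the PID-based strategy described above, the sketch below turns a drum speed error into a displacement correction and spends it on the pump first and then on the motor, with no change to engine speed. The gains, limits, and interfaces are hypothetical assumptions rather than values from the disclosure.

class DrumSpeedPid:
    """Hypothetical PID on drum speed error; output is a normalized displacement correction."""
    def __init__(self, kp=0.02, ki=0.005, kd=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target_rpm, measured_rpm, dt):
        error = target_rpm - measured_rpm
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def apply_correction(correction, pump_disp, motor_disp, min_motor_disp=0.3):
    """Raise pump displacement first; once the pump is at full stroke, lower motor displacement instead."""
    if correction > 0.0 and pump_disp < 1.0:
        pump_disp = min(1.0, pump_disp + correction)
    elif correction > 0.0:
        motor_disp = max(min_motor_disp, motor_disp - correction)
    else:
        pump_disp = max(0.0, pump_disp + correction)
    return pump_disp, motor_disp

pid = DrumSpeedPid()
correction = pid.update(target_rpm=12.0, measured_rpm=10.0, dt=0.1)   # drum is 2 rpm slow
print(apply_correction(correction, pump_disp=0.9, motor_disp=1.0))    # pump displacement nudged up, motor unchanged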
Referring now toFIG.5, a method500for controlling a drum drive system having a variable displacement pump and a variable displacement motor to provide a target drum speed by modulating engine speed, pump displacement, and motor displacement, is shown according to an exemplary embodiment. The method may include maintaining the engine speed at the lowest required level while actively controlling pump displacement and motor displacement to provide the required power demand to operate the drum at the target drum speed. At step502, a control system (e.g., the drum control system150, the drum assembly controller152, etc.) is configured to receive and monitor pressure data indicative of a system pressure (e.g., pressure of the hydraulic fluid flow, etc.) within a drum drive system (e.g., the drum drive system120, etc.) from at least one pressure sensor (e.g., the first pressure sensor154, the second pressure sensor156, etc.). At step504, the control system is configured to determine whether the system pressure is less than a maximum or threshold pressure (e.g., 5000 psi, etc.) for the drum drive system. If the system pressure is less than the maximum or threshold pressure (e.g., by more than a threshold difference, etc.), the control system is configured to proceed to step506. At step506, the control system is configured to reduce a displacement of a variable displacement motor (e.g., the drum motor126, etc.) of the drum drive system in response to the system pressure being less than the maximum or threshold pressure. At step508, the control system is configured to reduce a speed of an engine (e.g., the engine16, etc.) coupled to a pump (e.g., the pump122, etc.) of the drum drive system based on the reduction in displacement of the variable displacement motor (e.g., if the speed of the engine is not at idle, unless the transmission of the vehicle is in drive and is then independently controlled based on vehicle driving needs, etc.). The control system may then return to step502to further reduce the speed of the engine, if possible. If the system pressure is not less than a maximum or threshold pressure (e.g., 5,000 psi, etc.) for the drum drive system, the control system is configured to determine, at step510, whether the system pressure is at or near the maximum or threshold pressure for the drum drive system. If the system pressure is at or near the maximum or threshold pressure for the drum drive system, the control system is configured to increase a displacement of a variable displacement motor of the drum drive system at step512and increase a speed of an engine coupled to a pump of the drum drive system based on the increase in displacement of the variable displacement motor at step514and thereafter return to step502. According to an exemplary embodiment, reducing the displacement of the variable displacement motor will generate a higher system pressure. By way of example, reducing the displacement of the variable displacement motor requires less fluid flow to maintain the same speed of the variable displacement motor, and thereby maintain the speed of the drum (e.g., see Equation (1), etc.). However, because the drum needs to continue running with the same load at the same speed, the horsepower consumption to drive the drum does not change. With the same horsepower consumption and a reduced fluid flow, the system pressure will increase (e.g., see Equation (4), etc.). 
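A minimal sketch of the pressure-threshold logic of steps502-514follows; the rationale for why reduced motor displacement raises pressure and permits a lower engine speed continues below. The sensor interface, the "at or near threshold" band, and the adjustment step sizes are hypothetical placeholders; only the 5,000 psi threshold figure comes from the example above.

MAX_PRESSURE_PSI = 5000.0      # threshold pressure from the example in method 500
NEAR_BAND_PSI = 250.0          # hypothetical "at or near threshold" band
STEP = 0.05                    # hypothetical adjustment increment

def control_step(system_pressure_psi, motor_disp, engine_rpm, min_motor_disp=0.25, idle_rpm=800.0):
    """One pass through steps 502-514: returns updated (motor_disp, engine_rpm)."""
    if system_pressure_psi < MAX_PRESSURE_PSI - NEAR_BAND_PSI:
        # Steps 506-508: reduce motor displacement, then reduce engine speed accordingly.
        if motor_disp > min_motor_disp:
            motor_disp = max(min_motor_disp, motor_disp - STEP)
            engine_rpm = max(idle_rpm, engine_rpm * (1.0 - STEP))
    else:
        # Steps 512-514: pressure is at or near the threshold, so back off toward
        # higher motor displacement and higher engine speed.
        motor_disp = min(1.0, motor_disp + STEP)
        engine_rpm = engine_rpm * (1.0 + STEP)
    return motor_disp, engine_rpm

print(control_step(system_pressure_psi=2000.0, motor_disp=1.0, engine_rpm=2000.0))   # (0.95, 1900.0): trade toward lower engine speed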
With the fluid flow reduced, the pump may be operated by the control system at a reduced speed while maintaining the current displacement setting thereof (e.g., see Equation (2), etc.). Since the speed of the pump is proportional to the speed of the engine, the control system may operate the engine at a reduced speed. Therefore, the control system is configured to control the engine, the pump, and the motor to provide the same horsepower output and drum speed while at significantly lower engine speeds, which may provide increased fuel efficiency, reduced emissions, decreased operational noise, increased engine life, etc. In still another embodiment, the pump displacement, the motor displacement, and the engine speed are controlled as shown inFIG.6.FIG.7shows one exemplary traditional control scheme for comparison purposes. According to one embodiment, an increasing drum speed command (e.g., for a drum drive system having a fixed displacement motor, for one or more target drum speeds, for a decreasing drive ratio, etc.) includes several stages of control: (i) the pump displacement is increased and/or set to achieve a target drum speed at a current engine speed (e.g., a speed that may be uncontrolled such as when driving, etc.) and maximum motor displacement, (ii) the variable motor displacement is decreased to achieve the target drum speed at the maximum pump displacement and the current engine speed, and (iii) the engine speed is increased to achieve the target drum speed at maximum pump displacement and a minimum motor displacement. The third control stage may be executed only when possible (e.g., the vehicle is not driving, the transmission18thereof is in neutral such that the vehicle drive system20is not being driven, etc.). As shown inFIGS.6and7, the graphs600and700include exemplary plots for (i) the relationship between pump displacement and drum speed (610and710), (ii) the relationship between motor displacement and drum speed (620and720), and (iii) the relationship between engine speed and drum speed (630and730). The minimum motor displacement may be limited (e.g., during the second stage of control, etc.) based on at least one of a target hydraulic pressure (e.g., differential, gauge, absolute, etc.), a maximum hydraulic pressure, engine load, engine torque, pump torque, motor torque, etc. The minimum motor displacement may additionally or alternatively be limited based on a mechanical or software lower threshold. The engine speed may be limited (e.g., during the third stage of control, etc.) based on at least one of engine load, engine torque, pump torque, motor torque, hydraulic pressure, etc. Engine speed control may be employed after motor displacement control and pump displacement control have been utilized. The motor displacement may be kept as high as possible until necessary for speed control (e.g., to improve component durability, etc.). The order of control used to achieve a given speed may include, by way of example, increasing pump displacement until it is at a maximum, then decreasing motor displacement until it is at the minimum threshold (e.g., acceptable, etc.) level, and thereafter increasing engine speed until the target drum speed is achieved. The control system of the present disclosure may address three constraints that could otherwise impact performance (e.g., reduce or prevent the ability to achieve or maintain the desired combination of motor displacement, pump displacement, and engine speed, etc.).
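As a non-limiting illustration of the three-stage ordering described above (pump displacement, then motor displacement, then engine speed), the following sketch walks the stages against a very simplified steady-state speed model; the model, displacement limits, and step sizes are hypothetical placeholders rather than values from the disclosure.

```python
# A minimal sketch of the three-stage ordering described above; the steady-state
# speed model, displacement limits, and step sizes are hypothetical placeholders
# rather than values from the disclosure.

def drum_speed(engine_rpm, pump_disp, motor_disp,
               pump_cc=100.0, motor_cc=500.0, drive_ratio=0.01):
    """Simplified steady-state model: flow scales with engine speed and pump
    displacement, and drum speed scales with flow over motor displacement."""
    flow = engine_rpm * pump_disp * pump_cc
    return drive_ratio * flow / (motor_disp * motor_cc)

def stage_commands(target_rpm, engine_rpm, pump_disp=0.0, motor_disp=1.0,
                   pump_max=1.0, motor_min=0.3, engine_max=2200.0,
                   engine_controllable=True, step=0.01, rpm_step=25.0):
    """Walk stages (i)-(iii) until the modeled drum speed reaches the target."""
    while drum_speed(engine_rpm, pump_disp, motor_disp) < target_rpm:
        if pump_disp < pump_max:                                  # stage (i)
            pump_disp = min(pump_max, pump_disp + step)
        elif motor_disp > motor_min:                              # stage (ii)
            motor_disp = max(motor_min, motor_disp - step)
        elif engine_controllable and engine_rpm < engine_max:     # stage (iii)
            engine_rpm += rpm_step
        else:
            break  # target unreachable within the stated constraints
    return engine_rpm, pump_disp, motor_disp
```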
By way of example, engine speed may be controllable only when the vehicle is not being driven (e.g., the transmission of the vehicle is in a neutral gear, etc.). By way of another example, the engine may not be configured to provide the required power at a low speed (e.g., a low idle speed, the lowest possible speed, the lowest desired speed, etc.). By way of yet another example, the hydraulic system may be configured to operate below a threshold pressure (e.g., to maintain component durability, etc.). The control system of the present disclosure may monitor engine power by evaluating percent load. In response to the engine percent loading exceeding a threshold level (e.g., 80%, etc.), the controller may be configured to increase engine speed and increase motor displacement to provide a desired drum speed (e.g., to prevent the engine from stalling, etc.). The controller may be configured to preemptively increase the engine speed for elevated drum speeds (e.g., reducing or limiting the prevalence of system “hunting” of engine speed, which may be much more noticeable than valve hunting due to engine noise changes). The control system of the present disclosure may monitor hydraulic pressure (e.g., differential, gauge, absolute, maximum, etc.). In one embodiment, the controller is configured to lower the pressure in response to the pressure exceeding a threshold level (e.g., 4,500 psi, etc.) by increasing motor displacement and increasing engine speed to maintain the desired drum speed. The controller may be configured to preemptively increase the engine speed for elevated drum speeds (e.g., reducing or limiting the prevalence of system “hunting” of engine speed, which may be much more noticeable than valve hunting due to engine noise changes). The present disclosure contemplates methods, systems and program products on memory or other machine-readable media for accomplishing various operations. The embodiments of the present disclosure may be implemented using existing computer processors, or by a special purpose computer processor for an appropriate system, incorporated for this or another purpose, or by a hardwired system. Embodiments within the scope of the present disclosure include program products or memory comprising machine-readable media for carrying or having machine-executable instructions or data structures stored thereon. Such machine-readable media can be any available media that can be accessed by a general purpose or special purpose computer or other machine with a processor. By way of example, such machine-readable media can comprise RAM, ROM, EPROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code in the form of machine-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer or other machine with a processor. Combinations of the above are also included within the scope of machine-readable media. Machine-executable instructions include, by way of example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing machines to perform a certain function or group of functions. 
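As a non-limiting illustration of the engine percent load and hydraulic pressure checks described earlier in this passage, the following sketch relieves the system when either limit is exceeded; the thresholds and step sizes are illustrative assumptions.

```python
# A minimal sketch of the two supervisory checks described earlier in this
# passage (engine percent load and hydraulic pressure); the thresholds and step
# sizes are illustrative assumptions.

LOAD_LIMIT_PCT = 80.0
PRESSURE_LIMIT_PSI = 4500.0

def supervise(engine_load_pct, pressure_psi, engine_rpm, motor_disp,
              rpm_step=100.0, disp_step=0.05, engine_max=2200.0, motor_max=1.0):
    """Relieve the system by raising engine speed and motor displacement together
    (holding drum speed roughly constant) when either limit is exceeded."""
    if engine_load_pct > LOAD_LIMIT_PCT or pressure_psi > PRESSURE_LIMIT_PSI:
        engine_rpm = min(engine_max, engine_rpm + rpm_step)
        motor_disp = min(motor_max, motor_disp + disp_step)
    return engine_rpm, motor_disp
```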
As utilized herein, the terms “approximately”, “about”, “substantially”, and similar terms are intended to have a broad meaning in harmony with the common and accepted usage by those of ordinary skill in the art to which the subject matter of this disclosure pertains. It should be understood by those of skill in the art who review this disclosure that these terms are intended to allow a description of certain features described and claimed without restricting the scope of these features to the precise numerical ranges provided. Accordingly, these terms should be interpreted as indicating that insubstantial or inconsequential modifications or alterations of the subject matter described and claimed are considered to be within the scope of the invention as recited in the appended claims. It should be noted that the term “exemplary” as used herein to describe various embodiments is intended to indicate that such embodiments are possible examples, representations, and/or illustrations of possible embodiments (and such term is not intended to connote that such embodiments are necessarily extraordinary or superlative examples). The terms “coupled,” “connected,” and the like, as used herein, mean the joining of two members directly or indirectly to one another. Such joining may be stationary (e.g., permanent) or moveable (e.g., removable, releasable, etc.). Such joining may be achieved with the two members or the two members and any additional intermediate members being integrally formed as a single unitary body with one another or with the two members or the two members and any additional intermediate members being attached to one another. References herein to the positions of elements (e.g., “top,” “bottom,” “above,” “below,” etc.) are merely used to describe the orientation of various elements in the figures. It should be noted that the orientation of various elements may differ according to other exemplary embodiments, and that such variations are intended to be encompassed by the present disclosure. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list. Conjunctive language such as the phrase “at least one of X, Y, and Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to convey that an item, term, etc. may be either X, Y, Z, X and Y, X and Z, Y and Z, or X, Y, and Z (i.e., any combination of X, Y, and Z). Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of X, at least one of Y, and at least one of Z to each be present, unless otherwise indicated. It is important to note that the construction and arrangement of the elements of the systems and methods as shown in the exemplary embodiments are illustrative only. Although only a few embodiments of the present disclosure have been described in detail, those skilled in the art who review this disclosure will readily appreciate that many modifications are possible (e.g., variations in sizes, dimensions, structures, shapes and proportions of the various elements, values of parameters, mounting arrangements, use of materials, colors, orientations, etc.) without materially departing from the novel teachings and advantages of the subject matter recited. For example, elements shown as integrally formed may be constructed of multiple parts or elements. 
It should be noted that the elements and/or assemblies of the components described herein may be constructed from any of a wide variety of materials that provide sufficient strength or durability, in any of a wide variety of colors, textures, and combinations. Accordingly, all such modifications are intended to be included within the scope of the present inventions. Other substitutions, modifications, changes, and omissions may be made in the design, operating conditions, and arrangement of the preferred and other exemplary embodiments without departing from the scope of the present disclosure or from the spirit of the appended claims.
11858172 | DETAILED DESCRIPTION Before turning to the figures, which illustrate the exemplary embodiments in detail, it should be understood that the present application is not limited to the details or methodology set forth in the description or illustrated in the figures. It should also be understood that the terminology is for the purpose of description only and should not be regarded as limiting. According to an exemplary embodiment, a concrete mixing vehicle includes a drum assembly having a mixing drum, a drive system, and a drum control system. The drum control system may be configured to control the drive system to rotate the mixing drum. Traditional drum control systems may be configured to passively control the rotation and rotational speed of the mixing drum (e.g., at a preset speed, at a preset speed ratio that varies with the engine speed, etc.). According to an exemplary embodiment, the drum control system of the present disclosure is configured to actively control the rotation and/or rotational speed of the mixing drum to provide and/or maintain target properties for the concrete (e.g., a desired consistency, mixture quality, amount of air entrainment, viscosity, slump, temperature, water content, etc.) during transportation and/or upon arrival at the job site. By way of example, the drum control system may be configured to monitor the properties of the concrete within the mixing drum (e.g., with a sensor, etc.) and adaptably adjust the rotational speed of the mixing drum to provide concrete having desired or target properties (e.g., in response to the current properties of the concrete approaching and/or reaching the target properties, etc.). The drum control system may monitor the concrete property (e.g., slump, etc.), adjust (e.g., increase, etc.) the drum speed in response to an indication that the property is at, approaching, or above a target level (e.g., a slump at, approaching, or above a target slump level, etc.), and adjust (e.g., decrease, etc.) the drum speed in response to an indication that the property is at, approaching, or below the target level. By way of example, the system may be configured to increase the drum speed in response to an indication that the concrete within the drum is at a six (6) slump and decrease the drum speed in response to an indication that the concrete within the drum is at a four (4) slump. The system may be configured to further decrease drum speed, add water or another substance, etc. to keep the concrete within the drum at the target level. In some embodiments, the drum control system is configured to additionally or alternatively control the rotation and/or rotational speed of the mixing drum based on actual and/or anticipated driving behavior and/or road parameters (e.g., acceleration, deceleration, road grades, speed limit changes, stop signs, traffic lights, road curvature, traffic information, traffic patterns, etc.; to prevent concrete from spilling out of the mixing drum; to maintain a desired speed of the mixing drum as the engine speed varies; etc.). According to an exemplary embodiment, the drum control system of the present disclosure is configured to additionally or alternatively predict a property of a mixture within the mixing drum at delivery based on various data. 
The various data may include delivery data (e.g., a delivery location, a delivery time, a delivery route, etc.), initial properties of the mixture (e.g., a weight of the mixture, a volume of the mixture, a constituent makeup of the mixture, an initial slump, an initial viscosity, a mixed, unmixed, or partially mixed status, etc.), target properties for the mixture (e.g., a desired consistency, mixture quality, amount of air entrainment, viscosity, slump, temperature, water content, etc.), environment data (e.g., an ambient temperature, a relative humidity, wind speed, elevation, precipitation characteristics, road attributes, traffic information/patterns, etc.), mixture data (e.g., current properties of the mixture, etc.), and/or GPS data (e.g., unscheduled stops, road attributes, traffic information/patterns, travel time updates, etc.). The drum control system may be further configured to selectively and/or adaptively control a pump of the drive system (e.g., a throttling element thereof, etc.) to adjust a speed of the mixing drum and provide a target drum speed for the mixing drum (e.g., to achieve a target property for the mixture, based on the predicted delivery properties, etc.). According to the exemplary embodiment shown inFIGS.1-4and10, a vehicle, shown as concrete mixing truck10, includes a drum assembly, shown as drum assembly100, and a control system, shown as drum control system150. According to an exemplary embodiment, the concrete mixing truck10is configured as a rear-discharge concrete mixing truck. In other embodiments, the concrete mixing truck10is configured as a front-discharge concrete mixing truck. As shown inFIG.1, the concrete mixing truck10includes a chassis, shown as frame12, and a cab, shown as cab14, coupled to the frame12(e.g., at a front end thereof, etc.). The drum assembly100is coupled to the frame12and disposed behind the cab14(e.g., at a rear end thereof, etc.), according to the exemplary embodiment shown inFIG.1. In other embodiments, at least a portion of the drum assembly100extends in front of the cab14. The cab14may include various components to facilitate operation of the concrete mixing truck10by an operator (e.g., a seat, a steering wheel, hydraulic controls, a user interface, switches, buttons, dials, etc.). As shown inFIGS.1and3, the concrete mixing truck10includes a prime mover, shown as engine16. As shown inFIG.1, the engine16is coupled to the frame12at a position beneath the cab14. The engine16may be configured to utilize one or more of a variety of fuels (e.g., gasoline, diesel, bio-diesel, ethanol, natural gas, etc.), according to various exemplary embodiments. According to an alternative embodiment, the engine16additionally or alternatively includes one or more electric motors coupled to the frame12(e.g., a hybrid vehicle, an electric vehicle, etc.). The electric motors may consume electrical power from an on-board storage device (e.g., batteries, ultra-capacitors, etc.), from an on-board generator (e.g., an internal combustion engine, etc.), and/or from an external power source (e.g., overhead power lines, etc.) and provide power to systems of the concrete mixing truck10. As shown inFIGS.1and3, the concrete mixing truck10includes a power transfer device, shown as transmission18. As shown inFIG.3, the engine16is coupled to the transmission18. In one embodiment, the engine16produces mechanical power (e.g., due to a combustion reaction, etc.) that flows into the transmission18.
As shown inFIGS.1and3, the concrete mixing truck10includes a first drive system, shown as vehicle drive system20, that is coupled to the transmission18. The vehicle drive system20may include drive shafts, differentials, and other components coupling the transmission18with a ground surface to move the concrete mixing truck10. As shown inFIG.1, the concrete mixing truck10includes a plurality of tractive elements, shown as wheels22, that engage a ground surface to move the concrete mixing truck10. In one embodiment, at least a portion of the mechanical power produced by the engine16flows through the transmission18and into the vehicle drive system20to power at least a portion of the wheels22(e.g., front wheels, rear wheels, etc.). In one embodiment, energy (e.g., mechanical energy, etc.) flows along a first power path defined from the engine16, through the transmission18, and to the vehicle drive system20. As shown inFIGS.1,2, and10, the drum assembly100of the concrete mixing truck10includes a drum, shown as mixing drum102. The mixing drum102is coupled to the frame12and disposed behind the cab14(e.g., at a rear and/or middle of the frame12, etc.). As shown inFIGS.1,2, and10, the drum assembly100includes a second drive system, shown as drum drive system120, that is coupled to the frame12. The concrete mixing truck10includes a first support, shown as front pedestal106, and a second support, shown as rear pedestal108. According to an exemplary embodiment, the front pedestal106and the rear pedestal108cooperatively couple (e.g., attach, secure, etc.) the mixing drum102to the frame12and facilitate rotation of the mixing drum102relative to the frame12. In an alternative embodiment, the drum assembly100is configured as a stand-alone mixing drum that is not coupled (e.g., fixed, attached, etc.) to a vehicle. In such an embodiment, the drum assembly100may be mounted to a stand-alone frame. The stand-alone frame may be a chassis including wheels that assist with the positioning of the stand-alone mixing drum on a worksite. Such a stand-alone mixing drum may also be detachably coupled to and/or capable of being loaded onto a vehicle such that the stand-alone mixing drum may be transported by the vehicle. As shown inFIGS.1and2, the mixing drum102defines a central, longitudinal axis, shown as axis104. According to an exemplary embodiment, the drum drive system120is configured to selectively rotate the mixing drum102about the axis104. As shown inFIGS.1and2, the axis104is angled relative to the frame12such that the axis104intersects with the frame12. According to an exemplary embodiment, the axis104is elevated from the frame12at an angle in the range of five degrees to twenty degrees. In other embodiments, the axis104is elevated by less than five degrees (e.g., four degrees, three degrees, etc.) or greater than twenty degrees (e.g., twenty-five degrees, thirty degrees, etc.). In an alternative embodiment, the concrete mixing truck10includes an actuator positioned to facilitate selectively adjusting the axis104to a desired or target angle (e.g., manually in response to an operator input/command, automatically according to a control scheme, etc.). As shown inFIGS.1,2, and10, the mixing drum102of the drum assembly100includes an inlet, shown as hopper110, and an outlet, shown as chute112. According to an exemplary embodiment, the mixing drum102is configured to receive a mixture, such as a concrete mixture (e.g., cementitious material, aggregate, sand, etc.), with the hopper110. 
As shown inFIGS.1and2, the mixing drum102includes a port, shown as injection port130. The injection port130may provide access into the interior of the mixing drum102to inject water and/or chemicals (e.g., air entrainers, water reducers, set retarders, set accelerators, superplasticizers, corrosion inhibitors, coloring, calcium chloride, minerals, and/or other concrete additives, etc.). According to an exemplary embodiment, the injection port130includes an injection valve that facilitates injecting the water and/or the chemicals from a fluid reservoir (e.g., a water tank, etc.) into the mixing drum102to interact with the mixture, while preventing the mixture within the mixing drum102from exiting the mixing drum102through the injection port130. In some embodiments, the mixing drum102includes multiple injection ports130(e.g., two injection ports, three injection ports, etc.) configured to facilitate independently injecting different water and/or chemicals into the mixing drum102. The mixing drum102may include a mixing element (e.g., fins, etc.) positioned within the interior thereof. The mixing element may be configured to (i) agitate the contents of the mixture within the mixing drum102when the mixing drum102is rotated by the drum drive system120in a first direction (e.g., counterclockwise, clockwise, etc.) and (ii) drive the mixture within the mixing drum102out through the chute112when the mixing drum102is rotated by the drum drive system120in an opposing second direction (e.g., clockwise, counterclockwise, etc.). As shown inFIGS.2and3, the drum drive system120includes a pump, shown as pump122, a reservoir, shown as fluid reservoir124, and an actuator, shown as drum actuator126. As shown inFIG.3, the fluid reservoir124, the pump122, and the drum actuator126are fluidly coupled. According to an exemplary embodiment, the drum actuator126is a hydraulic motor, the fluid reservoir124is a hydraulic fluid reservoir, and the pump122is a hydraulic pump. The pump122may be configured to pump fluid (e.g., hydraulic fluid, etc.) stored within the fluid reservoir124to drive the drum actuator126. According to an exemplary embodiment, the pump122is configured to facilitate selectively and/or adaptively controlling the output of the drum actuator126. In one embodiment, the pump122includes a variable displacement hydraulic pump (e.g., an axial piston pump, etc.) and has a pump stroke that is variable. The pump122may be configured to pressurize hydraulic fluid based on the pump stroke (e.g., the greater the pump stroke, the higher the pressure, and the faster the drum actuator126rotates the mixing drum102, etc.). The pump122may include a throttling element (e.g., a swash plate, etc.). The pump stroke of the pump122may vary based on the orientation of the throttling element. In one embodiment, the pump stroke of the pump122varies based on an angle of the throttling element (e.g., relative to an axis along which the pistons move within the axial piston pump, etc.). By way of example, the pump stroke may be zero where the angle of the throttling element is equal to zero. The pump stroke may increase as the angle of the throttling element increases. In one embodiment, the throttling element of the pump122is movable between a stroked position (e.g., a maximum stroke position, a partially stroked position, etc.) and a destroked position (e.g., a minimum stroke position, a partially destroked position, etc.). According to an exemplary embodiment, an actuator is coupled to the throttling element of the pump122.
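As a non-limiting illustration of the pump stroke and throttling-element angle relationship described above (zero stroke at zero angle, increasing stroke with increasing angle), the following sketch uses a tangent model common in textbook treatments of axial piston pumps; the model and the maximum angle are assumptions, not values from the disclosure.

```python
# A minimal sketch of the swash-plate relationship described above: pump stroke
# is zero at a zero throttling-element angle and grows with the angle. The
# tangent model and the maximum angle are common textbook assumptions, not
# values from the disclosure.
import math

def pump_displacement_fraction(swash_angle_deg, max_angle_deg=18.0):
    """Fraction of maximum displacement for a given throttling-element angle."""
    angle = max(0.0, min(swash_angle_deg, max_angle_deg))
    return math.tan(math.radians(angle)) / math.tan(math.radians(max_angle_deg))

def pump_flow_lpm(shaft_rpm, swash_angle_deg, displacement_cc_per_rev=100.0):
    """Flow rises with both shaft speed and pump stroke."""
    return shaft_rpm * displacement_cc_per_rev * pump_displacement_fraction(swash_angle_deg) / 1000.0

print(pump_flow_lpm(1500, 0.0))   # zero stroke -> no flow
print(pump_flow_lpm(1500, 18.0))  # full stroke -> maximum flow at this shaft speed
```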
The actuator may be positioned to move the throttling element between the stroked position and the destroked position. The drum control system150may be configured to generate a first command signal and a second command signal. The first command signal may engage the actuator to move the throttling element of the pump122into the destroked position, thereby decreasing the pump stroke. The second command signal may engage the actuator to move the throttling element of the pump122into the stroked position, thereby increasing the pump stroke. According to another exemplary embodiment, a valve is positioned to facilitate movement of the throttling element between the stroked position and the destroked position. In one embodiment, the valve includes a resilient member (e.g., a spring, etc.) configured to bias the throttling element in the destroked position (e.g., by biasing movable elements of the valve into positions where a hydraulic circuit actuates the throttling element into the destroked positions, etc.). Pressure from fluid flowing through the pump122may overcome the resilient member to actuate the throttling element into the stroked position (e.g., by actuating movable elements of the valve into positions where a hydraulic circuit actuates the throttling element into the stroked position, etc.). In other embodiments, the drum actuator126is or includes an internal combustion engine. In such embodiments, the fluid reservoir124may be configured to store liquid and/or gaseous fuel (e.g., gasoline, diesel, propane, natural gas, hydrogen, etc.), and the pump122may be configured as a fuel pump. In still other embodiments, the drum actuator126is or includes an electric motor. In such embodiments, the fluid reservoir124may be an energy storage device (e.g., a battery, a capacitor, etc.) configured to store and provide chemical and/or electrical energy. The drum drive system120may not include the pump122in such embodiments. According to an exemplary embodiment, the drum actuator126is mounted to the concrete mixing truck10at the same angle as the axis104of the mixing drum102(e.g., such that the output of drum actuator126rotates about an axis parallel to the axis104, etc.). As shown inFIG.2, the drum drive system120includes a drive wheel, shown as drum drive wheel128, coupled to the mixing drum102. The drum drive wheel128may be welded, bolted, or otherwise secured to the head of the mixing drum102. The center of the drum drive wheel128may be positioned along the axis104such that the drum drive wheel128rotates about the axis104. According to an exemplary embodiment, the drum actuator126is coupled to the drum drive wheel128(e.g., with a belt, a chain, etc.) to facilitate driving the drum drive wheel128and thereby rotate the mixing drum102. The drum drive wheel128may be or include a sprocket, a cogged wheel, a grooved wheel, a smooth-sided wheel, a sheave, a pulley, or still another member. In other embodiments, the drum drive system120does not include the drum drive wheel128. By way of example, the drum drive system120may include a gearbox that couples the drum actuator126to the mixing drum102. By way of another example, the drum actuator126(e.g., an output thereof, etc.) may be directly coupled to the mixing drum102(e.g., along the axis104, etc.) to rotate the mixing drum102. As shown inFIG.3, the concrete mixing truck10includes a power takeoff unit, shown as power takeoff unit32, that is coupled to the transmission18. 
In one embodiment, the transmission18and the power takeoff unit32include mating gears that are in meshing engagement. A portion of the energy provided to the transmission18flows through the mating gears and into the power takeoff unit32, according to an exemplary embodiment. In one embodiment, the mating gears have the same effective diameter. In other embodiments, at least one of the mating gears has a larger diameter, thereby providing a gear reduction or a torque multiplication and increasing or decreasing the gear speed. As shown inFIG.3, the power takeoff unit32is selectively coupled to the pump122, with a clutch34. In some embodiments, the concrete mixing truck10does not include the clutch34. By way of example, the power takeoff unit32may be directly coupled to the pump122(e.g., a direct configuration, a non-clutched configuration, etc.). According to an alternative embodiment, the power takeoff unit32includes the clutch34(e.g., a hot shift PTO, etc.). In one embodiment, the clutch34includes a plurality of clutch discs. When the clutch34is engaged, an actuator forces the plurality of clutch discs into contact with one another, which couples an output of the transmission18with the pump122. In one embodiment, the actuator includes a solenoid that is electronically actuated according to a clutch control strategy. When the clutch34is disengaged, the pump122is not coupled to (i.e., is isolated from) the output of the transmission18. Relative movement between the clutch discs or movement between the clutch discs and another component of the power takeoff unit32may be used to decouple the pump122from the transmission18. In one embodiment, energy flows along a second power path defined from the engine16, through the transmission18and the power takeoff unit32, and into the pump122when the clutch34is engaged. When the clutch34is disengaged, energy flows from the engine16, through the transmission18, and into the power takeoff unit32. The clutch34selectively couples the pump122to the engine16, according to an exemplary embodiment. In one embodiment, energy along the first flow path is used to drive the wheels22of the concrete mixing truck10, and energy along the second flow path is used to operate the drum drive system120(e.g., power the pump122to drive the drum actuator126to thereby rotate the mixing drum102, etc.). Energy may flow along the first flow path during normal operation of the concrete mixing truck10and selectively flow along the second flow path. By way of example, the clutch34may be engaged such that energy flows along the second flow path when the pump122is used to drive the mixing drum102. When the pump122is not used to drive the mixing drum102(e.g., when the mixing drum102is empty, etc.), the clutch34may be selectively disengaged, thereby conserving energy. As shown inFIGS.1,2, and10, the drum assembly100includes a sensor, shown as sensor140. According to an exemplary embodiment, the sensor140includes a mixture sensor that is positioned within the mixing drum102and configured to acquire mixture data indicative of one or more properties of the mixture within the mixing drum102. In one embodiment, the sensor140includes a plurality of mixture sensors (e.g., two, three, four, etc.), each mixture sensor configured to acquire data indicative of at least one of the one or more properties. 
The one or more properties of the mixture may include a mixture quality, a slump, a consistency of the mixture, a viscosity, a temperature, an amount of air entrainment, an amount of water content, a weight, a volume, a rotational velocity, a rotational acceleration, a surface tension, a mixed status, an unmixed status, a partially mixed status, etc. of the mixture. The drum control system150may be configured to control the rotational speed of the drum actuator126by selectively controlling the pump122(e.g., the angle of the throttling element thereof, etc.) based on an operator input and/or a property of the mixture within the mixing drum102(e.g., as determined based on the mixture data acquired by the sensor140, etc.) to provide a target or desired property for the mixture. In other embodiments, the sensor140of the drum assembly100does not include the mixture sensors. In some embodiments, the sensors140include one or more drive system sensors. The drive system sensors may be variously positioned on, around, and/or within one or more components of the drum drive system120to acquire drive system data. The drive system data may be indicative of one or more operating characteristics of the drum drive system120. The operating characteristics may include a speed of the mixing drum102, a direction of rotation of the mixing drum102, a pressure associated with the pump122(e.g., a hydraulic pressure, an inlet pressure, an outlet pressure, etc.), another hydraulic system pressure, and/or other operating characteristics of the drum drive system120. In some embodiments, the sensor140includes one or more environment sensors. The environment sensors may be variously positioned on, around, and/or within the concrete mixing truck10to acquire environment data. The environment data may be indicative of an environmental characteristic (e.g., external to the mixing drum102, etc.). The environmental characteristics may include an ambient temperature, a relative humidity, wind speed, elevation, precipitation characteristics (e.g., rain, snow, fog, etc.), road attributes, traffic information/patterns, etc. The environment sensors may include a temperature sensor, a barometer or other pressure sensor, a humidity sensor, a pitot tube, an altimeter, an accelerometer, a camera, a proximity sensor, and/or other sensors configured to acquire information about the environment external to the mixing drum102. By way of example, during operation, the mixing drum102may be loaded with a concrete mixture through the hopper110. The drum drive system120may be operated to rotate the mixing drum102in a first direction to mix and agitate the concrete mixture contained in the mixing drum102with the mixing element. Water and/or chemicals may be pumped into the mixing drum102through the injection port130to provide a desired property of the concrete mixture and/or to prevent the concrete mixture from setting within the mixing drum102. The concrete mixing truck10may transport the mixture to a job site (e.g., a construction site, etc.). During such transportation, the drum control system150may be configured to selectively and/or adaptively control the drum drive system120(e.g., the pump122to increase or decrease a speed of the drum actuator126, etc.) to provide a target drum speed.
The drum control system150may be configured to control the drum drive system120based on mixture data acquired by the sensors140such that the concrete mixture within the mixing drum102has one or more desired or target properties (e.g., a desired consistency, mixture quality, amount of air entrainment, viscosity, slump, temperature, water content, etc.) during transportation and/or upon arrival at the job site. Upon arrival at the job site with the concrete mixture having the one or more desired properties, the drum drive system120may be operated to rotate the mixing drum102in an opposing second direction. The rotation of the mixing element in the opposing second direction may cause the mixing element to carry the concrete mixture out of the mixing drum102. The chute112of the drum assembly100may be used to dispense and guide the concrete mixture away from the frame12of the concrete mixing truck10to the concrete mixture's destination (e.g., a concrete form, a wheelbarrow, a concrete pump machine, etc.). Drum Control and Property Prediction System According to the exemplary embodiment shown inFIG.4, the drum control system150for the drum assembly100of the concrete mixing truck10includes a controller, shown as drum assembly controller160. In one embodiment, the drum assembly controller160is configured to selectively engage, selectively disengage, control, and/or otherwise communicate with components of the drum assembly100and/or the concrete mixing truck10(e.g., actively control the components thereof, etc.). As shown inFIG.4, the drum assembly controller160is coupled to the engine16, the clutch34, the drum drive system120(e.g., the pump122, etc.), the injection port130(e.g., the injection valve thereof, etc.), the sensor(s)140, a user interface188, and a global positioning system (GPS)190. In other embodiments, the drum assembly controller160is coupled to more or fewer components. The drum assembly controller160may be configured to predict a property of the mixture within the mixing drum102at delivery based on various data (e.g., delivery data, initial properties, target properties, environment data, mixture data, GPS data, etc.). The drum assembly controller160may be further configured to selectively and/or adaptively control the pump122(e.g., the throttling element thereof, etc.) to adjust a speed of the drum actuator126and provide a target drum speed for the mixing drum102(e.g., to achieve a target property for the mixture, etc.). By way of example, the drum assembly controller160may send and receive signals with the engine16, the clutch34, the drum drive system120, the injection port130, the sensor140, the user interface188, and/or the GPS190. In one embodiment, the drum assembly controller160is configured to selectively turn on and selectively turn off one or more of the various functionalities described herein. The drum assembly controller160may turn on and turn off one or more of the various functionalities automatically, based on user requests during initial manufacture, and/or based on user input. The drum assembly controller160may be implemented as a general-purpose processor, an application specific integrated circuit (ASIC), one or more field programmable gate arrays (FPGAs), a digital-signal-processor (DSP), circuits containing one or more processing components, circuitry for supporting a microprocessor, a group of processing components, or other suitable electronic processing components. 
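As a non-limiting illustration of the property-tracking behavior described above (increasing the drum speed when the monitored property, such as slump, is above the target level and decreasing it when below), the following sketch captures the decision logic; the target band, speed steps, and limits are hypothetical.

```python
# A minimal sketch of the slump-tracking behavior described above; the target
# band, speed steps, and limits are hypothetical and only illustrate the
# decision logic.

def adjust_drum_speed_for_slump(measured_slump, drum_rpm, target_slump=5.0,
                                band=0.5, rpm_step=1.0, rpm_min=2.0, rpm_max=20.0):
    """Increase drum speed when the slump is at or above the target band,
    decrease it when the slump is at or below the band, otherwise hold."""
    if measured_slump >= target_slump + band:
        drum_rpm = min(rpm_max, drum_rpm + rpm_step)
    elif measured_slump <= target_slump - band:
        drum_rpm = max(rpm_min, drum_rpm - rpm_step)
    return drum_rpm

# Example: at a six slump the speed is bumped up, at a four slump it is bumped down.
print(adjust_drum_speed_for_slump(6.0, 10.0))  # -> 11.0
print(adjust_drum_speed_for_slump(4.0, 10.0))  # -> 9.0
```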
According to the exemplary embodiment shown inFIG.4, the drum assembly controller160includes a processing circuit162having a processor164and a memory166. The processing circuit162may include an ASIC, one or more FPGAs, a DSP, circuits containing one or more processing components, circuitry for supporting a microprocessor, a group of processing components, or other suitable electronic processing components. In some embodiments, the processor164is configured to execute computer code stored in the memory166to facilitate the activities described herein. The memory166may be any volatile or non-volatile computer-readable storage medium capable of storing data or computer code relating to the activities described herein. According to an exemplary embodiment, the memory166includes computer code modules (e.g., executable code, object code, source code, script code, machine code, etc.) configured for execution by the processor164. As shown inFIG.4, the memory166includes various modules for completing processes described herein. More particularly, the memory166includes an engine module168, an input/output (I/O) module170, a GPS module172, and a concrete property module174including a sensor module176, a prediction module178, a recording module180, a drive module182, and an injection module184. While various modules with particular functionality are shown inFIG.4, it should be understood that the drum assembly controller160and the memory166may include any number of modules for completing the functions described herein. For example, the activities of multiple modules may be combined as a single module and additional modules with additional functionality may be included. Further, it should be understood that the drum assembly controller160may further control other processes beyond the scope of the present disclosure. As shown inFIG.4, the engine module168is coupled to the engine16. The engine module168may be configured to receive engine data from the engine16. The engine data may include performance characteristics of the engine16including engine speed (e.g., revolutions-per-minute (RPMs), etc.), engine torque, and/or engine acceleration. As shown inFIG.4, the engine module168is coupled to the concrete property module174such that the concrete property module174may receive and interpret the engine data when controlling the drum drive system120. As shown inFIG.4, the I/O module170is coupled to the user interface188. In one embodiment, the user interface188includes a display and an operator input. The display may be configured to display a graphical user interface, an image, an icon, a notification, and/or still other information. In one embodiment, the display includes a graphical user interface configured to provide general information about the concrete mixing truck10(e.g., vehicle speed, fuel level, warning lights, etc.). The graphical user interface may also be configured to display a speed of the mixing drum102, an indication of one or more predicted properties of the mixture within the mixing drum102at delivery (e.g., temperature, viscosity, slump, mix quality, an amount of air entrainment, water content, a weight, a volume, etc.), a notification in response to the one or more properties of the mixture reaching a target value/amount (e.g., a desired slump, temperature, viscosity, mix quality, amount of air entrainment, water content, etc.), and/or still other information relating to the drum assembly100and/or the mixture within the mixing drum102. 
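As a non-limiting illustration of how the code modules named above could be composed within the controller's processing loop, the following sketch wires data-gathering inputs to a drum speed command; the class layout, method names, and placeholder policy are assumptions for illustration, not the disclosed implementation.

```python
# A minimal sketch of how the named code modules could be wired together inside
# the controller's processing loop; the class layout, method names, and the
# placeholder policy are assumptions for illustration, not the disclosed
# implementation.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class DrumAssemblyControllerSketch:
    read_engine_data: Callable[[], Dict]          # stands in for the engine module
    read_gps_data: Callable[[], Dict]             # stands in for the GPS module
    read_mixture_data: Callable[[], Dict]         # stands in for the sensor module
    command_drum_speed: Callable[[float], None]   # stands in for the drive module

    def step(self, target_slump: float) -> float:
        """One control pass: gather data, pick a drum speed, send the command."""
        mixture = self.read_mixture_data()
        # Simple placeholder policy standing in for the concrete property module.
        speed = 12.0 if mixture.get("slump", target_slump) > target_slump else 8.0
        self.command_drum_speed(speed)
        return speed
```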
The operator input may be used by an operator to provide commands and/or information (e.g., initial properties of the mixture, target properties for the mixture, delivery data for the mixture, etc.) to at least one of the clutch34, the drum drive system120, the injection port130, the I/O module170, the GPS module172, the concrete property module174, and the GPS190. The operator input may include one or more buttons, knobs, touchscreens, switches, levers, joysticks, pedals, a steering wheel, and/or handles. The operator input may facilitate manual control of some or all aspects of the operation of the concrete mixing truck10. It should be understood that any type of display or input controls may be implemented with the systems and methods described herein. The I/O module170may be configured to receive information regarding initial properties of the mixture and/or target properties for the mixture from the user interface188, from a customer device, and/or from a device of the concrete plant. The initial properties of the mixture may include a weight of the mixture, a volume of the mixture (e.g., yards of concrete, etc.), a constituent makeup of the mixture (e.g., amount of cementitious material, aggregate, sand, water content, air entrainers, water reducers, set retarders, set accelerators, superplasticizers, corrosion inhibitors, coloring, calcium chloride, minerals, etc.), an initial slump, an initial viscosity, and/or any other properties known about the mixture prior to and/or upon entry thereof into the mixing drum102. The target properties for the mixture may include a desired consistency, mixture quality, amount of air entrainment, viscosity, slump, temperature, water content, and/or still other properties. As shown inFIG.4, the I/O module170is coupled to the concrete property module174such that the concrete property module174may receive, interpret, and/or record the initial properties of the mixture and/or the target properties for the mixture to predict the delivery properties for the mixture and/or when controlling the drum drive system120to provide the target properties for the mixture. In some embodiments, at least a portion of the initial properties and/or target properties are predefined within batching software (e.g., a standard initial property in batching software associated with the concrete plant, a standard target property in batching software associated with the concrete plant, software associated with the memory166and/or the concrete property module174of the drum assembly controller160, etc.). The I/O module170may be configured to receive a target drum life for the mixing drum102(e.g., a number of yards and mix of concrete the mixing drum102is designed to receive throughout an operating lifetime thereof, a number of yards of concrete the mixing drum102is designed to receive throughout an operating lifetime thereof without regard for the particular mix of the concrete, an operational life of the mixing element within the mixing drum102, a relationship between mixing element degradation and operational time, etc.) and/or a type of the mixing drum102(e.g., capacity, shape, manufacturer, a front discharge mixing drum, a rear discharge mixing drum, a thickness of a sidewall or other portion of the mixing drum102, type and/or identity of materials the mixing drum102is manufactured from, dimensional characteristics, etc.) from the user interface188and/or from a device of the concrete plant. 
In some embodiments, at least one of the target drum life and the type of the mixing drum102is predefined within the drum assembly controller160(e.g., the memory166, the drive module182, etc.). The I/O module170may be configured to receive delivery data regarding a delivery time, a delivery location (e.g., address of a job site, etc.), and/or a delivery route (e.g., based on road load parameters, etc.) for the mixture from the user interface188. As shown inFIG.4, the I/O module170is coupled to the GPS module172such that the GPS module172may receive the delivery data from the I/O module170. The GPS module172may be configured to transmit the delivery data to the GPS190. The GPS190may be configured to receive and interpret the delivery data from the GPS module172and return GPS data to the GPS module172. The GPS module172may be configured to receive the GPS data from the GPS190. The GPS data may include turn-by-turn driving instructions, travel distance, and/or travel time from a current location of the concrete mixing truck10to the destination. Such information may be transmitted from the GPS module172to the I/O module170for display to the operator on the user interface188to provide route guidance and/or to the concrete property module174for interpretation and/or recordation to predict the delivery properties for the mixture and/or when controlling the drum drive system120to provide the target properties for the mixture. The GPS data may additionally or alternatively include road attributes at and/or ahead of a current location of the concrete mixing truck10. The road attributes may include road grade, road curvature, speed limits, stop sign locations, traffic light locations, road classifications (e.g., arterial, collector, local, etc.), on/off ramp locations, altitude, etc. The road attributes may be utilized and/or monitored to detect changes therein (e.g., changes in elevation, etc.). In some embodiments, the GPS module172is configured to record road attributes (e.g., road grades, stop light locations, stop sign locations, altitude, etc.) without or in addition to receiving the GPS data from the GPS190. In such embodiments, the GPS module172may be configured to learn as the concrete mixing truck10is driving along various routes such that the road attributes are known when the same route is encountered or will be encountered in the future. The GPS data may additionally or alternatively provide information regarding traffic information and/or traffic patterns at and/or ahead of the concrete mixing truck10. The concrete mixing truck10may include various sensors (e.g., accelerometers, gyroscopes, inclinometers, cameras, barometric or other pressure sensors, altimeters, environment sensors, etc.) variously positioned on, around, and/or within the concrete mixing truck10to acquire at least some of the road attributes. The sensors may also be configured to provide information regarding traffic information and/or traffic patterns (e.g., a vehicle slowing down, obstacles in the road, etc.). As shown inFIG.4, the GPS module172is coupled to the concrete property module174such that the concrete property module174may receive, interpret, and/or record the GPS data (e.g., the road attributes, traffic information, and/or traffic patterns from the GPS190; the road attributes, traffic information, and/or traffic patterns from the sensors; etc.) when predicting the delivery properties for the mixture and/or when controlling the drum drive system120to provide the target properties for the mixture.
As shown inFIG.4, the sensor module176is coupled to the sensors140(e.g., the mixture sensors, the environment sensors, etc.). The sensor module176may be configured to receive the mixture data and/or the environment data from the sensors140. The mixture data may include one or more current properties of the mixture within the mixing drum102. The one or more properties of the mixture may include a current slump, a current mixture quality, a current viscosity, a current temperature, a current amount of air entrainment, a current water content, a current weight, a current volume, a current rotational velocity, a current rotational acceleration, a current surface tension, a mixed status, an unmixed status, a partially mixed status, etc. of the mixture. The environment data may include one or more environmental characteristics. The environmental characteristics may include an ambient temperature, a relative humidity, wind speed, elevation, precipitation characteristics (e.g., rain, snow, fog, etc.), traffic information/patterns, road attributes, etc. In some embodiments, the sensor module176is configured to receive at least a portion of the environment data from an internet based service (e.g., a weather and/or topography service that may be accessed by and/or provided to the sensor module176and based on a current location of the concrete mixing truck10, etc.). The sensor module176may be configured to analyze the mixture data to determine various properties of the mixture (e.g., slump, mix status, etc.). By way of example, the sensor module176may employ a fluids and/or physics model configured to analyze various measurable characteristics of the mixture (e.g., velocity, acceleration, viscosity, air contents, surface tension, etc.) to estimate the slump of the mixture (e.g., slump may not be directly measured, etc.). For example, the slump may be determined based on the flow characteristics of the mixture within the mixing drum102as the mixing drum102rotates. According to an exemplary embodiment, the concrete property module174is configured to receive, interpret, and/or record at least one of the engine data (e.g., engine speed, etc.), the initial mixture properties (e.g., a weight of the mixture, a volume of the mixture, a constituent makeup of the mixture, etc.), the GPS data (e.g., road attributes, traffic information, etc.), the mixture data (e.g., current properties of the mixture, etc.), and/or the environment data to predict delivery properties for the mixture within the mixing drum102. The concrete property module174may be further configured to selectively and/or adaptively control the drive speed of the drum drive system120to achieve the target properties (e.g., a desired consistency, mixture quality, amount of air entrainment, viscosity, slump, temperature, water content, etc.) for the mixture during transport and/or upon arrival at the destination and/or maintain the target properties if achieved prior to arriving at the destination based on the various data. The prediction module178may be configured to predict delivery properties for the mixture based on the initial properties, the target properties, the delivery data, the environment data, the GPS data, the drive system data, and/or the mixture data. The prediction module178may be configured to additionally or alternatively predict the delivery properties for the mixture based on a current state of the mixing drum102or components thereof. 
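As a non-limiting illustration of estimating slump from indirectly measured quantities, as the passage above describes, the following sketch uses a simple linear form; the form and its coefficients are purely illustrative stand-ins for the fluids/physics model, which is not specified in the disclosure.

```python
# A minimal sketch of estimating slump from indirectly measured quantities, as
# the passage above describes; the linear form and its coefficients are purely
# illustrative stand-ins for the fluids/physics model, which is not specified
# in the disclosure.

def estimate_slump(drum_rpm, motor_pressure_psi, mixture_temp_c, air_content_pct,
                   coeffs=(12.0, -0.15, -0.001, -0.05, 0.2)):
    """Map measurable drive and mixture characteristics to an approximate slump."""
    c0, c_rpm, c_pressure, c_temp, c_air = coeffs
    slump = (c0 + c_rpm * drum_rpm + c_pressure * motor_pressure_psi
             + c_temp * mixture_temp_c + c_air * air_content_pct)
    return max(0.0, slump)
```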
The prediction module178may be configured to additionally or alternatively predict the delivery properties for the mixture based on a current state of the mixing drum102or components thereof relative to one or more associated target life values (e.g., where the mixing drum102is at in its life cycle, where mixing elements or other components of the mixing drum102are at in their life cycle, mint, like-new, average, poor, degraded, etc.). The prediction module178may be configured to additionally or alternatively predict the delivery properties for the mixture based on the type of the mixing drum102. By way of example, the prediction module178may be configured to determine the current state (e.g., the amount of degradation, etc.) of the mixing drum102and/or components thereof (e.g., the mixing element, the fin, etc.). The prediction module178may determine the current state (e.g., using a degradation profile, etc.) based on a time of use, an amount of mixture mixed during the time of use (e.g., yards of mixture, etc.), an average rotational speed of the mixing drum102, a rotational speed profile of the mixing drum102(e.g., a history of speed over time, etc.), and/or still other operational characteristics of the mixing drum102. According to an exemplary embodiment, the current state of the mixing drum102affects the properties of the mixture. In some embodiments, the prediction module178is configured to provide an indication of the predicted delivery properties for the mixture to the I/O module170such that the indication may be displayed to the operator on the user interface188. In some embodiments, the indication is sent to a plant device at a concrete plant and/or a device of a customer. The prediction module178may be configured to continuously and/or periodically update the prediction during transit based on various adjustments performed by the mixing drum102and/or other devices, and/or based on external characteristics. By way of example, the prediction may be updated as the rotational speed of the mixing drum102is adaptively controlled. By way of another example, the prediction may be updated as water and/or chemicals are injected into the mixing drum102. By way of another example, the prediction may be updated as the current properties of the mixture change. By way of still another example, the prediction may be updated as the environmental characteristics (e.g., ambient temperature, altitude, humidity, etc.) change. By way of yet another example, the prediction may be updated as the travel time to the destination changes (e.g., due to accidents, traffic jams, road conditions, detours, etc.). The recording module180may be configured to record the delivery data, the initial properties, the target properties, the predicted delivery properties, the adjustments, the environment data, the mixture data, the GPS data, and/or actual delivery data (e.g., measured by the operator and/or quality personnel and/or the mixture sensor at delivery, etc.) to facilitate generating and/or updating a prediction algorithm stored within and operated by the prediction module178. Such generation and/or updating of the prediction algorithm may facilitate providing more accurate prediction and/or control of a mixture's properties during future deliveries. Additionally, once a sufficient amount of data has been compiled, the prediction algorithm may facilitate the elimination of the mixture sensor from the mixing drum102. 
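As a non-limiting illustration of a delivery-property prediction that is re-run as conditions change, the following sketch extrapolates slump at arrival; the slump-loss rate model and its constants are hypothetical placeholders for whatever prediction algorithm the prediction module uses.

```python
# A minimal sketch of a delivery-property prediction that is re-run as conditions
# change; the slump-loss rate model and its constants are hypothetical
# placeholders for whatever prediction algorithm the prediction module uses.

def predict_delivery_slump(current_slump, minutes_to_destination, ambient_temp_c,
                           drum_rpm, water_added_gal=0.0,
                           base_loss_per_min=0.02, temp_factor=0.0008,
                           rpm_factor=0.001, water_gain_per_gal=0.25):
    """Extrapolate the slump at arrival from the current slump, the remaining
    travel time, ambient temperature, drum speed, and any water injected en route."""
    loss_rate = (base_loss_per_min
                 + temp_factor * max(0.0, ambient_temp_c - 20.0)
                 + rpm_factor * drum_rpm)
    predicted = (current_slump - loss_rate * minutes_to_destination
                 + water_gain_per_gal * water_added_gal)
    return max(0.0, predicted)

# Re-running this as travel time, temperature, drum speed, or injections change
# keeps the prediction current during transit.
```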
By way of example, the initial properties of the mixture may be determined with the sensor140, provided by an operator of the plant, determined with sensors at the plant and provided to the drum assembly controller160, and/or determined using look-up tables (e.g., based on the compiled data, etc.) with the drum assembly controller160and/or thereafter provided to the drum assembly controller160. The predicted delivery properties and/or the mixture data may then be determined by the prediction module178using the prediction algorithm based on the initial properties, various adjustments performed during transit, the environmental data, and/or the GPS data (e.g., using the previously recorded data, look-up tables, etc.) without measurement thereof with a sensor. Such removal of the mixture sensor may reduce the cost to manufacture and operate the concrete mixing truck10. In some embodiments, the prediction module178and/or the recording module180are additionally or alternatively remotely positioned relative to the drum assembly controller160and/or the concrete mixing truck10(e.g., in a remote monitoring and/or command system, etc.). By way of example, the prediction module178and/or the recording module180may be remotely positioned on a server system and operate as a cloud-based system (e.g., a remote monitoring and/or command system, etc.) for the concrete mixing truck10. As such, the data recordation, analysis, and/or determinations made by the drum assembly controller160described herein may be additionally or alternatively performed remotely from the concrete mixing truck10and then communicated to the drum assembly controller160(e.g., the drive module182, the injection module184, etc.) for implementation. As an example, the drum assembly controller160may include a communications interface186that facilitates long-range wireless communication with a remote monitoring and/or command system192. The remote monitoring and/or command system192may include a processing circuit having a processor and a memory, and a communications interface (e.g., like the processing circuit162, the communications interface186, etc. of the drum assembly controller160). The communications interface of the remote monitoring and/or command system192may be configured to receive various information and/or data (e.g., the initial properties, the target properties, the environment data, the GPS data, the mixture data, the en route data, information regarding adjustments made by the drum assembly100, the drive system data, etc.) from the drum assembly controller160and/or other external systems (e.g., a weather service, a topography service, a GPS service, a user input device, a batching system, etc.). The remote monitoring and/or command system192may record and analyze the various information and data and perform the functions of the prediction module178and/or the recording module180described herein. The remote monitoring and/or command system192may further be configured to provide commands to the drum assembly controller160for the drive module182and/or the injection module184to implement (e.g., speed commands, injection commands, etc.). Therefore, any of the functions performed by the drum assembly controller160described herein may be remotely controlled by the remote monitoring and/or command system192. As shown inFIG.4, the drive module182is coupled to the clutch34and the drum drive system120(e.g., the pump122, etc.).
The drive module182may be configured to send a clutch command to the clutch34and/or a speed command to the drum drive system120. The clutch command may be transmitted by the drive module182to the clutch34to engage or disengage the clutch34to selectively couple the drum drive system120to the engine16to facilitate rotating the mixing drum102or stopping the rotation thereof. The clutch command may be transmitted in response to a user input to start or stop the rotation of the mixing drum102, in response to the mixture data from the sensor140indicating that a mixture has been poured into or removed from the mixing drum102, and/or in response to receiving a signal from a concrete plant indicating that loading of the mixing drum102has started. In other embodiments, the drive module182does not provide a clutch command (e.g., in embodiments where the concrete mixing truck10does not include the clutch34, etc.). The drive module182may be configured to transmit the speed command to the drum drive system120(e.g., to the pump122, while the clutch34is engaged, etc.) to selectively and/or adaptively control the drive speed of the mixing drum102. In some embodiments, the drive module182is configured to modulate the flow from the pump122(e.g., by controlling the angle/position of the throttling element thereof, etc.) to control the drive speed of the drum actuator126based on the engine speed as indicated by the engine data. By way of example, the drive module182may be configured to actively control the pump122as the concrete mixing truck10is driving such that as the engine speed changes, the drive speed of the mixing drum102remains at a desired or target drive speed. In one example, the drive module182may decrease the angle of the throttling element as the engine speed increases such that the pump122maintains a constant output to maintain the target drive speed of the mixing drum102. In another example, the drive module182may increase the angle of the throttling element as the engine speed decreases such that the pump122maintains a constant output to maintain the target drive speed of the mixing drum102. By way of another example, the drive module182may actively control the pump122in response to actual and/or anticipated accelerations and/or decelerations of the concrete mixing truck10. In a rear-discharge vehicle example, the drive module182may maintain or increase the angle of the throttling element as the concrete mixing truck10accelerates such that the output of the pump122increases, thereby causing the drive speed of the mixing drum102to increase. Such an increase in the drive speed of the mixing drum102may cause the mixing element of the mixing drum102to drive the mixture contained therein forward, preventing the mixture from spilling out of the rear of the mixing drum102. In a front-discharge vehicle example, the drive module182may increase the angle of the throttling element as the concrete mixing truck10decelerates such that the output of the pump122increases, thereby causing the drive speed of the mixing drum102to remain constant or increase. Such an increase in the drive speed of the mixing drum102may cause the mixing element of the mixing drum102to drive the mixture contained therein rearward, preventing the mixture from spilling out of the front of the mixing drum102. In some embodiments, the drive module182is configured to modulate the flow from the pump122to control the drive speed of the drum actuator126based on the GPS data.
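The engine-speed compensation just described (backing the throttling element off as engine speed rises so that the output of the pump122, and therefore the drum speed, stays constant) can be pictured with the feedforward sketch below; the GPS-based examples that follow layer additional adjustments on top of this. The displacement constants, the angle limit, and the function name are hypothetical and not the actual pump control law.

```python
def throttle_angle_for_target(target_drum_rpm: float,
                              engine_rpm: float,
                              pump_disp_per_deg: float = 0.05,
                              drum_rev_per_pump_unit: float = 0.005,
                              max_angle_deg: float = 18.0) -> float:
    """Hypothetical feedforward: choose the throttling-element angle so pump output,
    which scales with engine speed times displacement, yields the target drum speed.
    """
    # Assumed relation: drum_rpm ~= engine_rpm * (angle * pump_disp_per_deg) * drum_rev_per_pump_unit
    denom = engine_rpm * pump_disp_per_deg * drum_rev_per_pump_unit
    angle = target_drum_rpm / denom if denom > 0 else max_angle_deg
    return min(max_angle_deg, max(0.0, angle))

# As engine speed rises, the commanded angle falls, holding the drum speed constant.
for engine_rpm in (800, 1200, 1800, 2400):
    print(engine_rpm, round(throttle_angle_for_target(3.0, engine_rpm), 2))
```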
By way of example, the drive module182may actively control the pump122as the concrete mixing truck10encounters and/or anticipates that the concrete mixing truck10will encounter various different road parameters. In one example, the GPS data may indicate a road grade increase ahead (e.g., a hill, etc.). In a rear-discharge vehicle example, the drive module182may increase the angle of the throttling element as the concrete mixing truck10approaches a hill such that the output of the pump122increases, thereby causing the drive speed of the mixing drum102to increase. Such an increase in the drive speed of the mixing drum102may cause the mixing element of the mixing drum102to drive the mixture contained therein forward, preventing the mixture from spilling out of the rear of the mixing drum102. In another example, the GPS data may indicate a stop light, a stop sign, a slowing vehicle, and/or other obstacles are ahead of the concrete mixing truck10. In a front-discharge vehicle example, the drive module182may increase the angle of the throttling element in preparation for the deceleration of the concrete mixing truck10such that the output of the pump122increases, thereby causing the drive speed of the mixing drum102to increase. Such an increase in the drive speed of the mixing drum102may cause the mixing element of the mixing drum102to drive the mixture contained therein rearward, preventing the mixture from spilling out of the front of the mixing drum102. In a rear-discharge vehicle example, the drive module182may increase the angle of the throttling element in preparation for the acceleration of the concrete mixing truck10after slowing down and/or stopping such that the output of the pump122increases, thereby causing the drive speed of the mixing drum102to increase. Such an increase in the drive speed of the mixing drum102may cause the mixing element of the mixing drum102to drive the mixture contained therein forward, preventing the mixture from spilling out of the rear of the mixing drum102. In yet another example, the GPS data may indicate that the concrete mixing truck10is (i) approaching and/or traveling on an off ramp and/or (ii) approaching and/or traveling on a corner or curvature in the road. The drive module182may decrease the angle of the throttling element in response to the indication such that the output of the pump122decreases, thereby causing the drive speed of the mixing drum102to decrease. In other embodiments, the drive module182otherwise decreases the drive speed of the mixing drum102in response to the indication. Such a decrease in the drive speed of the mixing drum102may further stabilize the concrete mixing truck10while cornering and/or exiting from highways (e.g., taking an off ramp, etc.). In some embodiments, the drive module182is configured to modulate the flow from the pump122to selectively and/or adaptively control the drive speed of the drum actuator126based on the initial properties of the mixture, the predicted delivery properties (e.g., determined based on the initial properties, the delivery data, the environment data, the mixture data, the GPS data, the engine data, the target properties, the drum life of the mixing drum102, the type of the mixing drum102, etc.), and/or the mixture data indicating the current properties to provide the target properties (e.g., a desired consistency, mixture quality, amount of air entrainment, viscosity, slump, temperature, water content, etc.).
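One hedged way to represent the route-based examples above is as a table of speed biases applied on top of the commanded drum speed, with different biases for rear- and front-discharge drums. The event names, bias values, and limits below are illustrative assumptions, not values used by the drive module182.

```python
# Hypothetical GPS/route events and the drum-speed bias (in rpm) applied for each,
# keyed separately for rear- and front-discharge drums as described above.
SPEED_BIAS_RPM = {
    "rear_discharge": {"hill_ahead": +1.0, "stop_ahead": +0.5, "accelerating": +1.0,
                       "off_ramp": -1.0, "curve": -1.0},
    "front_discharge": {"hill_ahead": 0.0, "stop_ahead": +1.0, "decelerating": +1.0,
                        "off_ramp": -1.0, "curve": -1.0},
}

def adjusted_drum_speed(base_rpm: float, drum_type: str, events: list[str],
                        min_rpm: float = 1.0, max_rpm: float = 12.0) -> float:
    """Apply route-event biases to the base drum speed and clamp to drum limits."""
    bias = sum(SPEED_BIAS_RPM[drum_type].get(e, 0.0) for e in events)
    return min(max_rpm, max(min_rpm, base_rpm + bias))

print(adjusted_drum_speed(3.0, "rear_discharge", ["hill_ahead"]))  # speed up before a grade
print(adjusted_drum_speed(3.0, "front_discharge", ["off_ramp"]))   # slow for cornering stability
```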
In some embodiments, the drive module182is additionally or alternatively configured to modulate the flow from the pump122to selectively and/or adaptively control the drive speed of the drum actuator126based on the target drum life for the mixing drum102and/or the type of the mixing drum102. According to an exemplary embodiment, increasing the drive speed of the drum actuator126increases the rotational speed of the mixing drum102. The increase in the rotational speed of the mixing drum102may increase the temperature of the mixture (e.g., reducing the water content thereof, etc.), and decrease the slump while increasing the viscosity of the mixture at an increased rate (e.g., relative to a lower rotational speed, etc.). According to an exemplary embodiment, a reduced drive speed of the drum actuator126provides a decreased rotational speed for the mixing drum102. The decrease in the rotational speed of the mixing drum102may provide a constant or decreased temperature of the mixture and (i) maintain the slump and viscosity of the mixture or (ii) decrease the slump while increasing the viscosity at a reduced rate (e.g., relative to a higher rotational speed, etc.). As shown inFIG.4, the injection module184is coupled to the injection port130(e.g., injection valve thereof, etc.). The injection module184may be configured to send an injection command to the injection port130. The injection command may be transmitted by the injection module184to the injection port130to inject water and/or chemicals into the mixing drum102from the fluid reservoir. In some embodiments, the injection module184is configured to selectively control the valve of the injection port130to adaptively modulate an amount of water and/or chemicals that are injected into the mixing drum102before, during, and/or after transit. Such injection of water and/or chemicals may be used to supplement and/or replace adaptively controlling the drive speed of the mixing drum102to provide the target properties for the mixture. Such injection may be limited to a threshold amount of water and/or chemicals, and/or limited based on GPS location of the concrete mixing truck10. By way of example, the injection module184may be configured to prevent an operator of the concrete mixing truck10and/or the drum control system150from introducing more than a predetermined, threshold amount of water and/or chemicals to the mixture (e.g., indicated by a concrete plant, indicated by the target properties, etc.) to inhibit saturating the mixture with liquid. By way of another example, the injection module184may be configured to prevent an operator of the concrete mixing truck10and/or the drum control system150from introducing water and/or chemicals to the mixture based on the GPS location of the concrete mixing truck10. For example, the injection module184may selectively prevent the injection of water and/or chemicals after the concrete mixing truck10arrives at a job site. By way of example, the drive module182may be configured to selectively and/or adaptively control the drive speed of the drum actuator126such that the target properties for the mixture are achieved upon arrival of the concrete mixing truck10at the destination. As an example, the mixing drum102may be filled with a concrete mixture. At least some of the initial properties of the concrete mixture may be entered manually by an operator using the user interface188and/or at least some of the initial properties of the concrete mixture may be acquired by the sensors140.
The operator may enter target properties for the concrete mixture (e.g., customer desired properties, etc.) and/or a desired destination for the concrete mixture using the user interface188. The concrete property module174may be configured to determine a target drive speed for the mixing drum102based on (i) the distance, travel time, and/or road parameters between the current location of the concrete mixing truck10and the destination (e.g., indicated by the GPS data, etc.), (ii) the initial properties of the concrete mixture (e.g., manually entered, measured, etc.), and/or (iii) the target properties for the concrete mixture upon arrival. The drive module182may then engage the clutch34using the clutch command (e.g., if the concrete mixing truck10includes the clutch34, etc.) and provide the speed command to the drum drive system120to operate the drum actuator at the target drive speed. During transit, the concrete property module174may be configured to (i) periodically or continually monitor the mixture data with the sensors140indicating the current properties of the concrete mixture to adjust the target drive speed (e.g., to a second drive speed, etc.) if the target properties are being approached too quickly (e.g., slow down the mixing drum102, etc.) or too slowly (e.g., speed up the mixing drum102, etc.) and/or (ii) adjust the target drive speed (e.g., to a second drive speed, etc.) based on the engine data and/or the GPS data (e.g., during acceleration, during deceleration, when encountering hills, when encountering stop signs or stop lights, when encountering traffic, when encountering curves, when encountering on/off ramps, to keep the concrete mixture within the mixing drum102, to further stabilize the concrete mixing truck10, etc.). In some embodiments, the concrete property module174is configured to change (e.g., modify, alter, reduce, increase, etc.) the drive speed of the mixing drum102while measurement of the properties of the concrete mixture is being performed by the sensors140. By way of another example, the drive module182may be configured to selectively and/or adaptively control the drive speed of the drum actuator126to maintain the target properties for the mixture if achieved prior to the concrete mixing truck10arriving at the destination. As an example, the mixing drum102may be filled with a concrete mixture. At least some of the initial properties of the concrete mixture may be entered manually by an operator using the user interface188and/or at least some of the initial properties of the concrete mixture may be acquired by the sensors140. The operator may enter target properties for the concrete mixture (e.g., customer desired properties, etc.). The concrete property module174may be configured to determine a target drive speed for the mixing drum102based on (i) the initial properties of the concrete mixture (e.g., manually entered, measured, etc.) and (ii) the target properties for the concrete mixture. The drive module182may then engage the clutch34using the clutch command (e.g., if the concrete mixing truck10includes the clutch34, etc.) and provide the speed command to the drum drive system120to operate the drum actuator at the target drive speed.
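A minimal sketch of how a target drive speed like the one just commanded might be derived from the property gap and the remaining travel time is shown below. The linear slump-loss-per-revolution model and its constants are purely hypothetical assumptions, not the logic of the concrete property module174.

```python
def target_drive_speed(initial_slump_in: float,
                       target_slump_in: float,
                       travel_time_min: float,
                       slump_loss_per_rev: float = 0.01,
                       min_rpm: float = 1.0,
                       max_rpm: float = 12.0) -> float:
    """Choose a drum speed so the slump reaches the target roughly at arrival.

    Assumes (hypothetically) that each drum revolution removes a fixed amount of slump.
    """
    loss_needed = max(0.0, initial_slump_in - target_slump_in)
    if travel_time_min <= 0:
        return max_rpm
    revolutions_needed = loss_needed / slump_loss_per_rev
    rpm = revolutions_needed / travel_time_min
    return min(max_rpm, max(min_rpm, rpm))

# Working 6 in of slump down to 4 in over a 40 minute trip
print(round(target_drive_speed(6.0, 4.0, 40.0), 2))  # ~5.0 rpm
```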
During transit, the concrete property module174may be configured to (i) periodically or continually monitor the mixture data with the sensors140indicating the current properties of the concrete mixture to adjust the target drive speed if the target properties are being approached too quickly (e.g., slow down the mixing drum102, etc.) or too slowly (e.g., speed up the mixing drum102, etc.) and/or (ii) adjust the target drive speed based on the engine data and/or the GPS data (e.g., during acceleration, during deceleration, when encountering hills, when encountering stop signs or stop lights, when encountering traffic, when encountering curves, when encountering on/off ramps, to keep the concrete mixture within the mixing drum102, to further stabilize the concrete mixing truck10, etc.). Once the target properties are reached or about to be reached, as indicated by sensor inputs, the concrete property module174may be configured to determine and operate the drum drive system120at a second target drive speed to achieve and/or maintain the target properties (e.g., to prevent overshoot, to prevent reducing the slump too much, to prevent increasing the viscosity too much, from a concrete plant, etc.).

Drum Control Methods

Referring now toFIG.5, a method500for controlling a drum drive system of a concrete mixing truck is shown, according to an exemplary embodiment. At step502, a mixing drum (e.g., the mixing drum102, etc.) of a mixing vehicle (e.g., the concrete mixing truck10, etc.) receives a mixture (e.g., a wet concrete mixture, etc.). At step504, a controller (e.g., the drum assembly controller160, the remote monitoring and/or command system192, etc.) is configured to receive initial properties of the mixture (e.g., from an operator with the user interface188, etc.) and/or receive measured initial properties of the mixture from a sensor (e.g., the sensor140, etc.). At step506, the controller is configured to receive target properties for the mixture (e.g., from an operator with the user interface188, etc.). In some embodiments, the controller is configured to receive a signal from a batching system at a concrete plant. The signal may contain data indicating that loading of the mixing drum of the mixing vehicle has started and/or is about to start. The controller may be configured to initiate rotation of the mixing drum and/or set the speed of the drum to a desired speed based on the signal from the batching system and/or the target properties. In some embodiments, the controller is configured to rotate the mixing drum based on a GPS location of the mixing truck (e.g., to verify that the mixing truck is at the concrete plant and thereafter rotate the mixing drum, etc.). In other embodiments, the controller is configured to additionally or alternatively rotate the mixing drum based on a sensor input from the sensor indicating that loading has initiated. In still other embodiments, the controller is configured to rotate the mixing drum based on a user input indicating that loading has started and/or is about to start (e.g., using the user interface188, etc.). At step508, the controller is configured to determine a target drive speed for the mixing drum based on the initial properties and the target properties of the mixture. In other embodiments, the target speed is predetermined and sent to the controller from the batching system at the concrete plant. At step510, the controller is configured to operate the mixing drum (e.g., with the drum drive system120, etc.) at the target drive speed.
At step512, the controller is configured to monitor the current properties of the mixture using the sensor. In some embodiments, the controller is additionally or alternatively configured to estimate the current properties of the mixture (e.g., in embodiments where the concrete mixing truck10does not include a mixture sensor, the mixture data may be determined using a prediction algorithm based on the initial properties, various adjustments performed during transit, the environmental data, and/or the GPS data without measurement thereof with a sensor, etc.). At step514, the controller is configured to adjust the target drive speed to a second target drive speed based on the current properties approaching and/or reaching the target properties (e.g., to prevent overshoot, etc.). In some embodiments, the controller is additionally or alternatively configured to control an amount of water injected into the mixing drum to supplement or replace adaptively controlling the drive speed of the mixing drum to provide the target properties for the mixture. Such injection may be limited to a threshold amount of water and/or limited based on the GPS location of the mixing truck. Referring now toFIG.6, a method600for controlling a drum drive system of a concrete mixing truck is shown, according to another exemplary embodiment. At step602, a mixing drum (e.g., the mixing drum102, etc.) of a mixing vehicle (e.g., the concrete mixing truck10, etc.) receives a mixture (e.g., a wet concrete mixture, etc.). At step604, a controller (e.g., the drum assembly controller160, the remote monitoring and/or command system192, etc.) is configured to receive initial properties of the mixture (e.g., from an operator with the user interface188, from a batching system at a concrete plant, etc.) and/or receive measured initial properties of the mixture from a sensor (e.g., the sensor140, etc.). At step606, the controller is configured to receive target properties for the mixture (e.g., from an operator with the user interface188, etc.). In some embodiments, the controller is configured to receive a signal from a batching system at a concrete plant. The signal may contain data indicating that loading of the mixing drum of the mixing vehicle has started and/or is about to start. The controller may be configured to initiate rotation of the mixing drum and/or set the speed of the drum to a desired speed based on the signal from the batching system and/or the target properties. In some embodiments, the controller is configured to rotate the mixing drum based on a GPS location of the mixing truck (e.g., to verify that the mixing truck is at the concrete plant and thereafter rotate the mixing drum, etc.). In other embodiments, the controller is configured to additionally or alternatively rotate the mixing drum based on a sensor input from the sensor indicating that loading has initiated. In still other embodiments, the controller is configured to rotate the mixing drum based on a user input indicating that loading has started and/or is about to start (e.g., using the user interface188, etc.). At step608, the controller is configured to receive a desired destination for the mixture (e.g., from an operator using the user interface188, etc.). At step610, the controller is configured to receive GPS data indicating a travel distance, a travel time, traffic information, traffic patterns, and/or road parameters (e.g., from the GPS190, etc.) between a current location and the desired destination. 
At step612, the controller is configured to determine a target drive speed for the mixing drum based on the initial properties for the mixture, the target properties of the mixture, and/or the GPS data. In other embodiments, the target speed is predetermined and sent to the controller from the batching system at the concrete plant. At step614, the controller is configured to operate the mixing drum (e.g., with the drum drive system120, etc.) at the target drive speed. At step616, the controller is configured to monitor the current properties of the mixture using the sensor. In some embodiments, the controller is additionally or alternatively configured to estimate the current properties of the mixture (e.g., in embodiments where the concrete mixing truck10does not include a mixture sensor, the mixture data may be determined using a prediction algorithm based on the initial properties, various adjustments performed during transit, the environmental data, and/or the GPS data without measurement thereof with a sensor, etc.). At step618, the controller is configured to receive engine data indicating a speed and/or acceleration (or deceleration) of an engine (e.g., the engine16, etc.) of the mixing vehicle. At step620, the controller is configured to adjust the target drive speed to a second target drive speed based on (i) the current properties approaching and/or reaching the target properties (e.g., to prevent overshoot, etc.), (ii) the GPS data (e.g., hills, stop signs, stop lights, traffic, etc.), and/or (iii) the engine data (e.g., acceleration, deceleration, etc.). In some embodiments, the controller is additionally or alternatively configured to control an amount of water injected into the mixing drum to supplement or replace adaptively controlling the drive speed of the mixing drum to provide the target properties for the mixture. Such injection may be limited to a threshold amount of water and/or limited based on the GPS location of the mixing truck.

Property Prediction Methods

Referring now toFIG.7, a method700for predicting properties of a mixture within a mixing vehicle is shown, according to an exemplary embodiment. Method700may begin with a mixing drum (e.g., the mixing drum102, etc.) of a mixing vehicle (e.g., the concrete mixing truck10, etc.) receiving a mixture (e.g., a wet concrete mixture from a concrete plant, etc.). In some embodiments, a controller (e.g., the drum assembly controller160, etc.) is configured to receive a signal from a batching system at a concrete plant indicating that loading of the mixing drum of the mixing vehicle has started. Such a signal may cause the controller to initiate rotation of the mixing drum and/or set the speed of the drum to a desired speed. In some embodiments, such initiation of the rotation of the mixing drum further utilizes a GPS location of the mixing vehicle to verify that the mixing vehicle is at the concrete plant and being loaded when the signal is sent. In other embodiments, the initiation of the rotation is based on a sensor input from a sensor (e.g., the sensor140, a mixture sensor, etc.) indicating loading has initiated. In still other embodiments, the initiation of the rotation is based on an operator input (e.g., using the user interface188, etc.). At step702, a controller (e.g., the drum assembly controller160, the remote monitoring and/or command system192, etc.) is configured to receive delivery data for the mixture. The delivery data may include a delivery time, a delivery location, and/or a delivery route.
In some embodiments, the controller receives at least a portion of the delivery data from a user input (e.g., using the user interface188, etc.). The delivery data may be provided by an operator of the mixing vehicle, an employee at a concrete plant, and/or a customer and transmitted to the controller (e.g., remotely, wirelessly, via a wired connection, onboard the mixing vehicle, etc.). In some embodiments, the controller receives at least a portion of the delivery data from a GPS (e.g., the GPS190, etc.). At step704, the controller is configured to receive initial properties of the mixture. The initial properties of the mixture may include a weight of the mixture, a volume of the mixture, a constituent makeup of the mixture (e.g., amount of cementitious material, aggregate, sand, water content, air entrainers, water reducers, set retarders, set accelerators, superplasticizers, corrosion inhibitors, coloring, calcium chloride, minerals, etc.), an initial slump of the mixture, an initial viscosity of the mixture, and/or any other properties known about the mixture prior to and/or upon entry into the mixing drum. In some embodiments, the controller receives at least a portion of the initial properties from a user input (e.g., using the user interface188, etc.). The initial properties may be input by an operator of the mixing vehicle and/or an employee at a concrete plant (e.g., remotely, wirelessly, via a wired connection, onboard the mixing vehicle, etc.). In some embodiments, the controller receives at least a portion of the initial properties from a sensor (e.g., a mixture sensor positioned within the mixing drum, the sensor140, etc.). According to an exemplary embodiment, the controller is configured to receive environment data. The environment data may be indicative of an environmental characteristic. The environmental characteristics may include an ambient temperature, a relative humidity, wind speed, elevation, precipitation characteristics (e.g., rain, snow, fog, etc.), traffic information/patterns, road attributes, etc. In some embodiments, the controller receives at least a portion of the environment data from a user input (e.g., using the user interface188, etc.). The environment data may be input by an operator of the mixing vehicle and/or an employee at a concrete plant (e.g., remotely, wirelessly, via a wired connection, onboard the mixing vehicle, etc.). In some embodiments, the controller receives at least a portion of the environment data from a sensor (e.g., a temperature sensor, a barometer or other pressure sensor, a humidity sensor, a pitot tube, an altimeter, an accelerometer, a camera, a proximity sensor, a sensor positioned on the mixing vehicle, the sensor140, etc.). In some embodiments, the controller receives at least a portion of the environment data from an internet based service (e.g., a weather and/or topography service that is accessed by and/or provided to the controller and based on current location of the mixing vehicle, etc.). At step706, the controller is configured to receive target properties for the mixture. The target properties for the mixture may include a consistency, mixture quality, amount of air entrainment, viscosity, slump, temperature, water content, and/or still other properties desired for the mixture. According to an exemplary embodiment, the controller receives the target properties from a user input (e.g., using the user interface188, etc.). 
The target properties may be provided by an operator of the mixing vehicle, an employee at a concrete plant, and/or a customer (e.g., remotely, wirelessly, via a wired connection, onboard the mixing vehicle, etc.). In some embodiments, at least a portion of the initial properties and/or target properties are predefined within batching software (e.g., a standard initial property in batching software associated with the concrete plant, a standard target property in batching software associated with the concrete plant, software associated with the memory166and/or the concrete property module174of the drum assembly controller160, etc.). In some embodiments, the controller is configured to determine and operate the mixing drum (e.g., with the drum drive system120, etc.) at an initial drive speed based on the initial properties of the mixture, the delivery data, the environment data, and/or the target properties for the mixture. In other embodiments, the initial drive speed is predetermined and sent to the controller from the batching system at the concrete plant. In some embodiments, the controller is configured to additionally or alternatively determine and operate the mixing drum at the initial drive speed based on a target drum life for the mixing drum (e.g., a number of yards and mix of concrete the mixing drum is designed to receive throughout an operating lifetime thereof, a number of yards of concrete the mixing drum is designed to receive throughout an operating lifetime thereof without regard for the particular mix of the concrete, etc.) and/or a type of the mixing drum (e.g., capacity, shape, manufacturer, a front discharge mixing drum, a rear discharge mixing drum, a thickness of a sidewall or other portion of the mixing drum, type and/or identity of materials the mixing drum is manufactured from, dimensional characteristics, etc.). At step708, the controller is configured to predict delivery properties for the mixture (i.e., predicted properties for the mixture upon arrival at the destination) based on the delivery data, the initial properties of the mixture, and/or the environmental data. In some embodiments, the controller is configured to additionally or alternatively predict the delivery properties for the mixture based on a target drum life for the mixing drum, a target life of one or more mixing drum components, a current state of the mixing drum (e.g., relative to the target drum life for the mixing drum, etc.), a current state of one or more mixing drum components (e.g., relative to the target life for the one or more mixing drum components, etc.), and/or the type of the mixing drum. At step710, the controller is configured to provide an indication of the predicted delivery properties for the mixture. The predicted delivery properties may include a consistency, mixture quality, amount of air entrainment, viscosity, slump, temperature, water content, and/or still other properties predicted for the mixture upon arrival at the destination (e.g., a job site, etc.). In some embodiments, the indication of the predicted delivery properties for the mixture is provided to an operator of the mixing vehicle (e.g., on the user interface188within the cab14, etc.). In some embodiments, the indication of the predicted delivery properties for the mixture is provided to the batching system at the concrete plant (e.g., on a plant computer, etc.). 
In some embodiments, the indication of the predicted delivery properties for the mixture is provided to a customer (e.g., on a customer device, etc.). At step712, the controller is configured to provide an adjustment within predefined parameters based on the predicted delivery properties, the target properties, a target drum life for the mixing drum, a target life of one or more mixing drum components, a current state of the mixing drum (e.g., relative to the target drum life for the mixing drum, etc.), a current state of one or more mixing drum components (e.g., relative to the target life for the one or more mixing drum components, etc.), and/or the type of the mixing drum. In some embodiments, the adjustment includes adaptively controlling a speed at which a drive system (e.g., the drum drive system120, etc.) rotates the mixing drum (e.g., from a first speed to a second, different speed, etc.). Such control of the rotational speed of the mixing drum may alter the properties of the mixture (e.g., to achieve the target properties for the mixture, etc.). By way of example, increasing the speed of the mixing drum may increase the temperature of the mixture (e.g., reducing the water content thereof, etc.), and decrease the slump while increasing the viscosity of the mixture at an increased rate (e.g., relative to a lower rotational speed, etc.). By way of another example, a reduced speed of the mixing drum may provide a constant or decreased temperature of the mixture and (i) maintain the slump and viscosity of the mixture or (ii) decrease the slump while increasing the viscosity at a reduced rate (e.g., relative to a higher rotational speed, etc.). In some embodiments, the adjustment additionally or alternatively includes adaptively controlling an amount of water and/or chemicals injected from a reservoir into the mixing drum by an injection valve (e.g., the injection valve of the injection port130, etc.). Such injection of water and/or chemicals may be used to supplement and/or replace adaptively controlling the speed of the mixing drum to provide the target properties for the mixture. Such injection may be limited to a threshold amount of water and/or chemicals, and/or limited based on GPS location of the mixing vehicle. By way of example, the controller may be configured to prevent an operator of the mixing vehicle and/or the control scheme from introducing more than a predetermined, threshold amount of water and/or chemicals into the mixture (e.g., indicated by a batching system at a concrete plant, indicated by the target properties, indicated by a customer, etc.) to inhibit saturating the mixture with liquid. By way of another example, the controller may be configured to prevent an operator of the mixing vehicle and/or the control scheme from introducing water and/or chemicals to the mixture based on the GPS location of the mixing vehicle. For example, the controller may selectively prevent the injection of water and/or chemicals after the mixing vehicle arrives at a job site. At step714, the controller is configured to receive en route data. The en route data may include the environment data (e.g., updated environment data, an environmental characteristic such as an ambient temperature, a relative humidity, wind speed, elevation, precipitation characteristics, traffic information/patterns, road attributes, etc.), mixture data, and/or GPS data. The controller may receive the mixture data from a sensor (e.g., a mixture sensor, the sensor140, etc.)
positioned within the mixing drum and/or estimate the mixture data. The mixture data may be indicative of one or more current properties of the mixture within the mixing drum. The controller may receive the GPS data from the GPS. The GPS data may include turn-by-turn driving instructions, travel distance, and/or travel time from a current location of the mixing vehicle to the destination. The GPS data may additionally or alternatively provide information regarding traffic information and/or traffic patterns at and/or ahead of the mixing vehicle. At step716, the controller is configured to update the predicted delivery properties based on the adjustment performed and/or the en route data (e.g., the environment data, the mixture data, the GPS data, etc.). At step718, the controller is configured to determine whether delivery criteria has been satisfied (e.g., the delivery time has been reached, the mixing vehicle has arrived at the delivery location for the mixture, etc.). If the delivery criteria has not been satisfied, the controller is configured to repeat steps710-716. Thus, the controller may be configured to continuously and/or periodically (e.g., every minute, two minutes, five minutes, ten minutes, etc.; every mile, two miles, five miles, ten miles, etc.) (i) provide indications of the predicted delivery properties, (ii) make adjustments based on the predicted delivery properties and/or the target properties, (iii) receive the en route data (e.g., the environment data, the mixture data, the GPS data, etc.), and (iv) update the predicted delivery properties based on the adjustments and/or the en route data. If the delivery criteria has been satisfied, the controller is configured to provide an indication of the actual delivery properties of the mixture and/or the predicted delivery properties for the mixture. In some embodiments, the indication of the actual properties of the mixture is provided to an operator of the mixing vehicle (e.g., on the user interface188within the cab14, etc.). In some embodiments, the indication of the actual delivery properties of the mixture is provided to a concrete plant (e.g., on a plant computer, the batching system etc.). In some embodiments, the indication of the actual delivery properties of the mixture is provided to a customer (e.g., on a customer device, etc.). The actual delivery properties may be acquired and transmitted to the controller by the sensor within the mixing drum and/or manually determined and entered into the user interface by the operator and/or a quality personnel. The actual delivery properties of the mixture and the predicted delivery properties for the mixture may be compared and used for further processing. Referring now toFIG.8, a method800for predicting properties of a mixture within a mixing vehicle is shown, according to another exemplary embodiment. Method800may begin with a mixing drum (e.g., the mixing drum102, etc.) of a mixing vehicle (e.g., the concrete mixing truck10, etc.) receiving a mixture (e.g., a wet concrete mixture from a concrete plant, etc.). In some embodiments, a controller (e.g., the drum assembly controller160, etc.) is configured to receive a signal from a batching system at a concrete plant indicating that loading of the mixing drum of the mixing vehicle has started. Such a signal may cause the controller to initiate rotation of the mixing drum and/or set the speed of the drum to a desired speed. 
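The repeat-until-delivered loop of steps 710-718 described above may be easier to picture as a short sketch. Everything below (the trip state, the simple slump model, and the adjustment rule) is a hypothetical framing of those steps, not the controller's actual prediction algorithm or data shapes.

```python
from dataclasses import dataclass, field

@dataclass
class TripState:
    minutes_remaining: float
    slump_in: float
    target_slump_in: float
    drum_rpm: float
    log: list = field(default_factory=list)

def predict_delivery_slump(s: TripState) -> float:
    # Hypothetical model: slump keeps falling with remaining time and current drum speed.
    return max(0.0, s.slump_in - 0.004 * s.drum_rpm * s.minutes_remaining)

def adjust_within_limits(s: TripState, predicted: float) -> None:
    # Step 712: slow the drum if the target would be undershot, speed it up if overshooting.
    if predicted < s.target_slump_in - 0.25:
        s.drum_rpm = max(1.0, s.drum_rpm - 0.5)
    elif predicted > s.target_slump_in + 0.25:
        s.drum_rpm = min(12.0, s.drum_rpm + 0.5)

def en_route_loop(s: TripState, step_min: float = 5.0) -> None:
    # Steps 710-718: indicate, adjust, receive en route data, update, repeat until delivery.
    while s.minutes_remaining > 0:                            # step 718 delivery criteria
        predicted = predict_delivery_slump(s)                 # steps 708/716
        s.log.append((round(s.minutes_remaining), round(predicted, 2), s.drum_rpm))  # step 710
        adjust_within_limits(s, predicted)                    # step 712
        s.slump_in = max(0.0, s.slump_in - 0.004 * s.drum_rpm * step_min)  # step 714 (simulated en route data)
        s.minutes_remaining -= step_min
    s.log.append(("delivered", round(s.slump_in, 2), s.drum_rpm))

state = TripState(minutes_remaining=40.0, slump_in=6.0, target_slump_in=4.0, drum_rpm=6.0)
en_route_loop(state)
print(*state.log, sep="\n")
```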
In some embodiments, such initiation of the rotation of the mixing drum further utilizes a GPS location of the mixing vehicle to verify that the mixing vehicle is at the concrete plant and being loaded when the signal is sent. In other embodiments, the initiation of the rotation is based on a sensor input from a sensor (e.g., the sensor140, a mixture sensor, etc.) indicating loading has initiated. In still other embodiments, the initiation of the rotation is based on an operator input (e.g., using the user interface188, etc.). At step802, a controller (e.g., the drum assembly controller160, the remote monitoring and/or command system192, etc.) is configured to receive and record delivery data for the mixture. The delivery data may include a delivery time, a delivery location, and/or a delivery route. In some embodiments, the controller receives at least a portion of the delivery data from a user input (e.g., using the user interface188, etc.). The delivery data may be provided by an operator of the mixing vehicle, an employee at a concrete plant, and/or a customer and transmitted to the controller (e.g., remotely, wirelessly, via a wired connection, onboard the mixing vehicle, etc.). In some embodiments, the controller receives at least a portion of the delivery data from a GPS (e.g., the GPS190, etc.). At step804, the controller is configured to receive and record initial properties of the mixture. The initial properties of the mixture may include a weight of the mixture, a volume of the mixture, a constituent makeup of the mixture (e.g., amount of cementitious material, aggregate, sand, water content, air entrainers, water reducers, set retarders, set accelerators, superplasticizers, corrosion inhibitors, coloring, calcium chloride, minerals, etc.), an initial slump of the mixture, an initial viscosity of the mixture, and/or any other properties known about the mixture prior to and/or upon entry into the mixing drum. In some embodiments, the controller receives at least a portion of the initial properties from a user input (e.g., using the user interface188, etc.). The initial properties may be input by an operator of the mixing vehicle and/or an employee at a concrete plant (e.g., remotely, wirelessly, via a wired connection, onboard the mixing vehicle, etc.). In some embodiments, the controller receives at least a portion of the initial properties from a sensor (e.g., a mixture sensor positioned within the mixing drum, the sensor140, etc.). According to an exemplary embodiment, the controller is configured to receive and record environment data. The environment data may be indicative of an environmental characteristic. The environmental characteristics may include an ambient temperature, a relative humidity, wind speed, elevation, precipitation characteristics (e.g., rain, snow, fog, etc.), traffic information/patterns, road attributes, etc. In some embodiments, the controller receives at least a portion of the environment data from a user input (e.g., using the user interface188, etc.). The environment data may be input by an operator of the mixing vehicle and/or an employee at a concrete plant (e.g., remotely, wirelessly, via a wired connection, onboard the mixing vehicle, etc.). In some embodiments, the controller receives at least a portion of the environment data from a sensor (e.g., a temperature sensor, a barometer or other pressure sensor, a humidity sensor, a pitot tube, an altimeter, a sensor positioned on the mixing vehicle, the sensor140, etc.).
In some embodiments, the controller receives at least a portion of the environment data from an internet based service (e.g., a weather and/or topography service that is accessed by and/or provided to the controller and based on current location of the mixing vehicle, etc.). At step806, the controller is configured to receive and record target properties for the mixture. The target properties for the mixture may include a consistency, mixture quality, amount of air entrainment, viscosity, slump, temperature, water content, and/or still other properties desired for the mixture. According to an exemplary embodiment, the controller receives the target properties from a user input (e.g., using the user interface188, etc.). The target properties may be provided by an operator of the mixing vehicle, an employee at a concrete plant, and/or a customer (e.g., remotely, wirelessly, via a wired connection, onboard the mixing vehicle, etc.). In some embodiments, at least a portion of the target properties are predefined within batching software (e.g., a standard initial property in batching software associated with the concrete plant, a standard target property in batching software associated with the concrete plant, software associated with the memory166and/or the concrete property module174of the drum assembly controller160, etc.). In some embodiments, the controller is configured to determine and operate the mixing drum (e.g., with the drum drive system120, etc.) at an initial drive speed based on the initial properties of the mixture, the delivery data, the environment data, and/or the target properties for the mixture. In other embodiments, the initial drive speed is predetermined and sent to the controller from the batching system at the concrete plant. In some embodiments, the controller is configured to additionally or alternatively determine and operate the mixing drum at the initial drive speed based on a target drum life for the mixing drum (e.g., a number of yards and mix of concrete the mixing drum is designed to receive throughout an operating lifetime thereof, a number of yards of concrete the mixing drum is designed to receive throughout an operating lifetime thereof without regard for the particular mix of the concrete, etc.) and/or a type of the mixing drum (e.g., capacity, shape, manufacturer, a front discharge mixing drum, a rear discharge mixing drum, a thickness of a sidewall or other portion of the mixing drum, type and/or identity of materials the mixing drum is manufactured from, dimensional characteristics, etc.). At step808, the controller is configured to predict and record delivery properties for the mixture (i.e., predicted properties for the mixture upon arrival at the destination) based on the delivery data, the initial properties of the mixture, and/or the environmental data. In some embodiments, the controller is configured to additionally or alternatively predict the delivery properties for the mixture based on a target drum life for the mixing drum, a target life of one or more mixing drum components, a current state of the mixing drum (e.g., relative to the target drum life for the mixing drum, etc.), a current state of one or more mixing drum components (e.g., relative to the target life for the one or more mixing drum components, etc.), and/or the type of the mixing drum. At step810, the controller is configured to provide an indication of the predicted delivery properties for the mixture. 
The predicted delivery properties may include a consistency, mixture quality, amount of air entrainment, viscosity, slump, temperature, water content, and/or still other properties predicted for the mixture upon arrival at the destination (e.g., a job site, etc.). In some embodiments, the indication of the predicted delivery properties for the mixture is provided to an operator of the mixing vehicle (e.g., on the user interface188within the cab14, etc.). In some embodiments, the indication of the predicted delivery properties for the mixture is provided to a concrete plant (e.g., on a plant computer, the batching system etc.). In some embodiments, the indication of the predicted delivery properties for the mixture is provided to a customer (e.g., on a customer device, etc.). At step812, the controller is configured to provide and record an adjustment within predefined parameters based on the predicted delivery properties, the target properties, a target drum life for the mixing drum, a target life of one or more mixing drum components, a current state of the mixing drum (e.g., relative to the target drum life for the mixing drum, etc.), a current state of one or more mixing drum components (e.g., relative to the target life for the one or more mixing drum components, etc.), and/or the type of the mixing drum. In some embodiments, the adjustment includes adaptively controlling a speed at which a drive system (e.g., the drum drive system120, etc.) rotates the mixing drum (e.g., from a first speed to a second, different speed, etc.). Such control of the rotational speed of the mixing drum may alter the properties of the mixture (e.g., to achieve the target properties for the mixture, etc.). By way of example, increasing the speed of mixing drum may increase the temperature of the mixture (e.g., reducing the water content thereof, etc.), and decrease the slump while increasing the viscosity of the mixture at an increased rate (e.g., relative to a lower rotational speed, etc.). By way of another example, a reduced speed of the mixing drum may provide a constant or decreased temperature of the mixture and (i) maintain the slump and viscosity of the mixture or (ii) decrease the slump while increasing the viscosity at a reduced rate (e.g., relative to a higher rotational speed, etc.). In some embodiments, the adjustment additionally or alternatively includes adaptively controlling an amount of water and/or chemicals injected from a reservoir into the mixing drum by an injection valve (e.g., the injection valve of the injection port130, etc.). Such injection of water and/or chemicals may be used to supplement and/or replace adaptively controlling the speed of the mixing drum to provide the target properties for the mixture. Such injection may be limited to a threshold amount of water and/or chemicals, and/or limited based on GPS location of the mixing vehicle. By way of example, the controller may be configured to prevent an operator of the mixing vehicle and/or the control scheme from introducing more than a predetermined, threshold amount of water and/or chemicals into the mixture (e.g., indicated by a batching system at a concrete plant, indicated by the target properties, indicated by a customer, etc.) to inhibit saturating the mixture with liquid. By way of another example, the controller may be configured to prevent an operator of the mixing vehicle and/or the control scheme from introducing water and/or chemicals to the mixture based on the GPS location of the mixing vehicle. 
For example, the controller may selectively prevent the injection of water and/or chemicals after the mixing vehicle arrives at a job site. At step814, the controller is configured to receive and record en route data. The en route data may include the environment data (e.g., updated environment data, an environmental characteristic such as an ambient temperature, a relative humidity, wind speed, elevation, precipitation characteristics, traffic information/patterns, road attributes, etc.), mixture data, and/or GPS data. The controller may receive the mixture data from a sensor (e.g., a mixture sensor, the sensor140, etc.) positioned within the mixing drum and/or estimate the mixture data. The mixture data may be indicative of one or more current properties of the mixture within the mixing drum. The controller may receive the GPS data from the GPS. The GPS data may include turn-by-turn driving instructions, travel distance, and/or travel time from a current location of the mixing vehicle to the destination. The GPS data may additionally or alternatively provide information regarding traffic information and/or traffic patterns at and/or ahead of the mixing vehicle. At step816, the controller is configured to update and record the predicted delivery properties based on the adjustment performed and/or the en route data (e.g., the environment data, the mixture data, the GPS data, etc.). At step818, the controller is configured to determine whether delivery criteria has been satisfied (e.g., the delivery time has been reached, the mixing vehicle has arrived at the delivery location for the mixture, etc.). If the delivery criteria has not been satisfied, the controller is configured to repeat steps810-816. Thus, the controller may be configured to continuously and/or periodically (e.g., every minute, two minutes, five minutes, ten minutes, etc.; every mile, two miles, five miles, ten miles, etc.) (i) provide indications of the predicted delivery properties, (ii) make and record adjustments based on the predicted delivery properties and/or the target properties, (iii) receive and record the en route data (e.g., the environment data, the mixture data, the GPS data, etc.), and (iv) update and record the predicted delivery properties based on the adjustments and/or the en route data. If the delivery criteria has been satisfied, the controller is configured to provide the indication of the predicted delivery properties for the mixture (step820). At step822, the controller is configured to receive and record actual delivery properties of the mixture. In some embodiments, the controller receives at least a portion of the actual delivery properties from a user input (e.g., using the user interface188, manually determined and entered, etc.). The actual properties may be provided by an operator of the mixing vehicle, a quality personnel, and/or a customer (e.g., remotely, wirelessly, via a wired connection, onboard the mixing vehicle, etc.). In some embodiments, the controller receives at least a portion of the actual properties from a sensor (e.g., a mixture sensor positioned within the mixing drum, the sensor140, etc.). At step824, the controller is configured to provide an indication of the actual delivery properties of the mixture. In some embodiments, the indication of the actual properties of the mixture is provided to an operator of the mixing vehicle (e.g., on the user interface188within the cab14, etc.). 
In some embodiments, the indication of the actual delivery properties of the mixture is provided to a concrete plant (e.g., on a plant computer, a batching system, etc.). In some embodiments, the indication of the actual delivery properties of the mixture is provided to a customer (e.g., on a customer device, etc.). According to an exemplary embodiment, the controller is configured to record the delivery data, the initial properties, the target properties, the predicted delivery properties, the adjustments, the en route data (e.g., the environment data, the mixture data, the GPS data, etc.), and/or the actual delivery data to facilitate generating and/or updating a prediction algorithm stored within and operated by the controller. Such generation and/or updating of the prediction algorithm may facilitate providing more accurate prediction and/or control of a mixture's properties in future deliveries. Additionally, once a sufficient amount of data has been compiled, the prediction algorithm may facilitate the removal of the mixture sensor from the mixing vehicle. By way of example, the initial properties of the mixture may be input by the batching system at the plant, determined with sensors at the plant, and/or determined using look-up tables (e.g., based on the compiled data, etc.). The predicted delivery properties and/or the mixture data may be determined based on the initial properties, various adjustments made during transit, the environmental data, and/or the GPS data (e.g., using the compiled data, look-up tables, etc.) without needing to be directly measured with a sensor. Such removal of the mixture sensor may thereby reduce the cost to manufacture and operate the mixing vehicle. Referring now toFIG.9, a method900for determining whether a combination of ingredients is sufficiently mixed is shown, according to another exemplary embodiment. At step902, a mixing drum (e.g., the mixing drum102, etc.) of a mixing vehicle (e.g., the concrete mixing truck10, etc.) receives a combination of ingredients (e.g., a non-wet mixture, a non-mixed combination of ingredients, etc.). By way of example, the combination of ingredients may include various unmixed constituents when deposited into the mixing drum (e.g., cementitious materials, aggregate, sand, rocks, water, additives, absorbent materials, etc.). At step904, a controller (e.g., the drum assembly controller160, the remote monitoring and/or command system192, etc.) is configured to provide a command to a drive system (e.g., the drum drive system120, etc.) to mix the combination of ingredients within the mixing drum. At step906, the controller is configured to estimate and/or monitor a property of the combination of ingredients (e.g., a slump, a consistency, a homogeneity, a moisture content, etc.; with a sensor; using a model, algorithm, look-up table, etc.; etc.). At step908, the controller is configured to determine that the combination of ingredients has been sufficiently mixed (e.g., based on the property, the combination of ingredients has been combined to form a wet concrete mixture, etc.). At step910, the controller is configured to implement a drum control process (e.g., method500, method600, etc.) and/or a property prediction process (e.g., method700, method800, etc.).

Command Control and Monitoring System

According to the exemplary embodiment shown inFIGS.10-13, the concrete mixing truck10includes a command control and monitoring system including the sensors140, the drum control system150, and the user interface188.
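Before detailing that system, the sufficiency check of steps 906-908 above can be pictured as watching a monitored property settle. The probe signal, window size, and tolerance below are hypothetical illustrations, not the controller's actual criterion.

```python
from statistics import pstdev

def sufficiently_mixed(readings: list[float], window: int = 10, tol: float = 0.1) -> bool:
    """Step 908: treat the load as mixed once the monitored property stops varying.

    `readings` is a hypothetical stream of a monitored property (e.g., a probe's
    moisture or consistency signal) sampled once per drum revolution.
    """
    if len(readings) < window:
        return False
    return pstdev(readings[-window:]) < tol

# Simulated probe signal that settles as the ingredients combine (steps 902-906).
signal = [3.0, 2.2, 2.8, 1.9, 1.4, 1.2, 1.05, 1.02, 1.01, 1.0, 1.0, 0.99, 1.0, 1.01, 1.0, 1.0]
for i in range(1, len(signal) + 1):
    if sufficiently_mixed(signal[:i]):
        print(f"sufficiently mixed after {i} samples")  # step 908 -> hand off to step 910
        break
```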
The command control and monitoring system is configured to facilitate an operator in providing commands to various components of the concrete mixing truck10(e.g., the engine16, the drum drive system120, the sensors140, the user interface188, etc.), according to an exemplary embodiment. The command control and monitoring system is additionally or alternatively configured to facilitate an operator in monitoring various components of the concrete mixing truck10based on diagnostic information regarding the various components, according to an exemplary embodiment. As shown inFIGS.10and11, the user interface188includes a first interface, shown as display device200, a second interface, shown as cab input device210, and a third interface, shown as rear input device220. As shown inFIG.10, the display device200and the cab input device210are positioned within the cab14, and the rear input device220is positioned external from the cab14at the rear of the drum assembly100. In other embodiments, the rear input device220is otherwise positioned about the exterior of the concrete mixing truck10. As shown inFIG.11, the display device200includes a screen, shown as display screen202. According to an exemplary embodiment, the display screen202of the display device200is configured as a touchscreen display (e.g., a tablet, a touchscreen monitor, etc.). The display device200may be configured to display diagnostic information regarding the operational functionality and/or state of various components of the concrete mixing truck10(e.g., faults, etc.), operating data regarding current operating parameters of various components of the concrete mixing truck10, indicia, graphical user interfaces ("GUIs"), and/or still other information to an operator within the cab14of the concrete mixing truck10. The display device200may be configured to facilitate providing commands to one or more components of the concrete mixing truck10(e.g., the drum drive system120, the sensors140, the drum control system150, etc.) from within the cab14of the concrete mixing truck10. As shown inFIG.11, the cab input device210includes a command interface, shown as cab control pad212, having various buttons and an input, shown as joystick214. According to an exemplary embodiment, the various buttons of the cab control pad212facilitate selecting one or more components to control with the joystick214, selecting a mode of operation of the drum assembly100, and/or activating/deactivating various components of the concrete mixing truck10from within the cab14. By way of example, the cab control pad212and/or the joystick214may facilitate controlling a rotational direction of the mixing drum102, controlling a speed of the mixing drum102, controlling an angle of the chute112, controlling an injection of fluid (e.g., water, chemical additives, etc.) into the mixing drum102, stopping the rotation of the mixing drum102, starting the rotation of the mixing drum102, locking and unlocking one or more components of the drum assembly100, raising and lowering an additional axle of the concrete mixing truck10(e.g., for increased loading conditions, etc.), discharging the mixture from the mixing drum102, and/or otherwise controlling one or more components of the concrete mixing truck10from within the cab14. According to an exemplary embodiment, the rear input device220includes a second control pad or rear control pad having various buttons (e.g., similar to the cab control pad212of the cab input device210, etc.).
The various buttons of the second control pad of the rear input device220may facilitate selecting one or more components to control (e.g., with the joystick214, with the rear input device220, etc.), selecting a mode of operation of the drum assembly100, and/or activating/deactivating various components of the concrete mixing truck10from outside of the concrete mixing truck10. As shown inFIG.12, the display screen202of the display device200is configured to display a first graphical user interface, shown as status GUI230. The status GUI230includes various features such as a settings button232, a mode button234, a command bar236, a drum status indicator238, and a mixture status indicator240. The settings button232may facilitate adjusting the information displayed on the status GUI230and/or adjusting the settings of the display device200(e.g., a brightness, etc.). The mode button234may indicate a current mode the drum assembly100is operating in and/or facilitate changing the current mode. The command bar236may indicate the current commands that are being provided to the drum assembly100. The drum status indicator238may indicate the speed of the mixing drum102and/or the direction of rotation of the mixing drum102. The mixture status indicator240may display the mixture data and indicate one or more properties of the mixture within the mixing drum102. By way of example, the one or more properties of the mixture may include a mixture quality, a slump, a consistency of mixture, a viscosity, a temperature, an amount of air entrainment, an amount of water content, a weight, a volume, a rotational velocity, a rotational acceleration, a surface tension, etc. of the mixture. As shown inFIG.13, the display screen202of the display device200is configured to display a second graphical user interface, shown as command GUI250. The command GUI250includes a first section, shown as first keypad section252, a second section, shown as second keypad section254, and a third section, shown as joystick section256. According to an exemplary embodiment, the first keypad section252is associated with the cab control pad212of the cab input device210, the second keypad section254is associated with the rear control pad of the rear input device220, and the joystick section256is associated with the joystick214. By way of example, when a button is pressed on the cab control pad212of the cab input device210, the associated button in the first keypad section252of the command GUI250may illuminate, change color, become highlighted, and/or otherwise change to indicate that the associated button has been pressed on the cab control pad212. By way of another example, when a button is pressed on the rear control pad of the rear input device220, the associated button in the second keypad section254of the command GUI250may illuminate, change color, become highlighted, and/or otherwise change to indicate that the associated button has been pressed on the rear input device220. By way of yet another example, a degree of engagement of the joystick214may be represented by a sliding indicator bar of the joystick section256(e.g., the more the bar is filled the faster the speed of the mixing drum102may be, etc.). In some embodiments, the display device200is additionally or alternatively configured to display at least one of a chute diagnostics GUI, a fuse diagnostics GUI, a drum diagnostics GUI, and/or other diagnostics GUIs to indicate the status, mode, and/or faults of various components of the concrete mixing truck10.
The chute diagnostics GUI may be configured to display the status and/or position of the chute112(e.g., up, down, angled left, angled right, centered, locked, unlocked, etc.) and information regarding the circuits thereof. The fuse diagnostics GUI may be configured to indicate whether each respective fuse of the concrete mixing truck10is either operational or blown. The drum diagnostics GUI may be configured to display any electrical issues with the drum assembly100such as shorts, open circuits, improper installation, etc. and/or display the mode, status, and/or operational parameters of components of the drum assembly100(e.g., activation of a drum stop solenoid, a drum charge solenoid, a drum discharge solenoid, etc.; a drum speed; a drum direction; etc.). According to an exemplary embodiment, the command control and monitoring system is configured to facilitate diagnosing faults and identifying the probable location of the faults on the concrete mixing truck10. By way of example, when a fault is diagnosed by the command control and monitoring system, the display device200may provide a GUI having a graphical representation of the concrete mixing truck10(e.g., similar to that shown inFIG.10, etc.) indicating the location of the fault on the concrete mixing truck10and/or a suggested solution. For example, components experiencing a fault may be displayed in a different color (e.g., red, etc.), flashing, highlighted, circled, and/or otherwise identified. In some embodiments, the faults are telematically sent to a remote server or computer (e.g., a truck hub, a repair shop, an owner's business, etc.). By way of example, the command control and monitoring system may be configured to monitor (i) the mixture sensors configured to acquire the mixture data for monitoring concrete properties of the mixture, (ii) the drive system sensors configured to acquire the drive system data for monitoring the operating characteristics of the drum drive system120, (iii) the environment sensors configured to acquire environment data for monitoring environmental characteristics external to the mixing drum102, and/or (iv) inputs and outputs used to control functions of the concrete mixing truck10(e.g., inputs and outputs of the drum drive system120, the injector device of the injection port130, the engine16, etc.). The command control and monitoring system may be further configured to determine that there is a potential fault with one or more of the sensors (e.g., the mixture sensors, the environment sensors, the drive system sensors, etc.), the input, and/or the output. The command control and monitoring system may be further configured to provide a fault notification on the display device200indicating the potential fault location. In some embodiments, the control and monitoring system is configured to monitor a property of the mixture within the mixing drum102and provide an alert when the property begins to deviate from an expected or predicted value. For example, the control and monitoring system may be configured to determine that a property is changing at an increased rate or too slow a rate, determine a potential fault location based on the property that is changing, and provide a fault notification that indicates the potential fault location. By way of example, the control and monitoring system may recognize that the slump of the mixture is increasing (e.g., becoming less viscous, more fluid, etc.).
The control and monitoring system may therefore provide an alert that the slump is increasing at an alarming rate and provide an indication that the injection valve may have been left open or stuck (e.g., frozen open in the winter, etc.). The control and monitoring system may thereby provide an alert on the display device200to check the injection valve to stop the fluid injection and prevent the slump from increasing further from the target slump. According to an exemplary embodiment, the display device200is portable and removable from the cab14(e.g., a tablet, a laptop, a smart device, etc.). The display device200may therefore be capable of capturing pictures of the failed or fault area/component (e.g., to be sent to a technician, etc.). The display device200may additionally or alternatively be capable of being brought to the area of the concrete mixing truck10where the fault originated and provide step-by-step instructions on how to diagnose and troubleshoot the problem. The instructions may be visually displayed and/or audibly provided by the display device200. The display device200may be configured to display data sheets, prints, and/or schematics without having to search or request such information to facilitate the diagnosis and/or troubleshooting. The display device200may be configured to facilitate automatic ordering of replacement parts/components directly therefrom. Further, the display device200may facilitate remote diagnostics from a service/technician center. As utilized herein, the terms “approximately”, “about”, “substantially”, and similar terms are intended to have a broad meaning in harmony with the common and accepted usage by those of ordinary skill in the art to which the subject matter of this disclosure pertains. It should be understood by those of skill in the art who review this disclosure that these terms are intended to allow a description of certain features described and claimed without restricting the scope of these features to the precise numerical ranges provided. Accordingly, these terms should be interpreted as indicating that insubstantial or inconsequential modifications or alterations of the subject matter described and claimed are considered to be within the scope of the invention as recited in the appended claims. It should be noted that the term “exemplary” as used herein to describe various embodiments is intended to indicate that such embodiments are possible examples, representations, and/or illustrations of possible embodiments (and such term is not intended to connote that such embodiments are necessarily extraordinary or superlative examples). The terms “coupled,” “connected,” and the like, as used herein, mean the joining of two members directly or indirectly to one another. Such joining may be stationary (e.g., permanent) or moveable (e.g., removable, releasable, etc.). Such joining may be achieved with the two members or the two members and any additional intermediate members being integrally formed as a single unitary body with one another or with the two members or the two members and any additional intermediate members being attached to one another. References herein to the positions of elements (e.g., “top,” “bottom,” “above,” “below,” etc.) are merely used to describe the orientation of various elements in the figures. It should be noted that the orientation of various elements may differ according to other exemplary embodiments, and that such variations are intended to be encompassed by the present disclosure. 
Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list. Conjunctive language such as the phrase “at least one of X, Y, and Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to convey that an item, term, etc. may be either X, Y, Z, X and Y, X and Z, Y and Z, or X, Y, and Z (i.e., any combination of X, Y, and Z). Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of X, at least one of Y, and at least one of Z to each be present, unless otherwise indicated. It is important to note that the construction and arrangement of the elements of the systems and methods as shown in the exemplary embodiments are illustrative only. Although only a few embodiments of the present disclosure have been described in detail, those skilled in the art who review this disclosure will readily appreciate that many modifications are possible (e.g., variations in sizes, dimensions, structures, shapes and proportions of the various elements, values of parameters, mounting arrangements, use of materials, colors, orientations, etc.) without materially departing from the novel teachings and advantages of the subject matter recited. For example, elements shown as integrally formed may be constructed of multiple parts or elements. It should be noted that the elements and/or assemblies of the components described herein may be constructed from any of a wide variety of materials that provide sufficient strength or durability, in any of a wide variety of colors, textures, and combinations. Accordingly, all such modifications are intended to be included within the scope of the present inventions. Other substitutions, modifications, changes, and omissions may be made in the design, operating conditions, and arrangement of the preferred and other exemplary embodiments without departing from scope of the present disclosure or from the spirit of the appended claims. | 120,156 |
11858173 | DETAILED DESCRIPTION OF THE INVENTION FIG.1depicts an isometric view of an apparatus10for heat treating semi-crystalline or crystallizable polymer powder, which is shown schematically, according to a first exemplary embodiment of the invention. The apparatus10generally comprises a heating device12, a vessel14for containing the powder that is heated by the heating device12, mixing elements such as a grid or sieve16positioned within the interior of the vessel14and at least partially immersed within the powder (not shown), and a means for moving18the vessel14. The heating device12is a mechanism for producing heat and exposing the vessel14and its contents to the heat. The heating device12may be an oven in which the vessel14is positioned. Alternatively, the heating device12may be a heating element that is connected to or part of the vessel14. As another alternative, the vessel14could be heated using circulating hot oil (or other fluid). The heating device12may be any conventional heating mechanism that is known to those skilled in the art and is not limited to that which is shown and described. For example, the heating device may comprise a heating element which converts electricity into heat through the process of resistive heating or it may comprise hot oil which may be circulating. The vessel14is a container having any shape, such as a hollow tube or cylinder, in which the powder is contained. It should be understood that the size and shape of the vessel14can vary. The vessel14is configured to rotate about its longitudinal axis A (also referred to herein as the axis of rotation). The vessel14may be composed of a thermally conductive material, in particular a metal or metal alloy such as stainless steel, such as grades 316L or 904, for example. The ends of the vessel14may be either open (as shown) or closed. A removable lid closure (not shown) may be removably applied over one open end of the vessel14. The vessel14may be a closed system to prevent contamination or the escapement of powder during operation. According to other embodiments, the vessel14may be configured to permit a gas (e.g., an inert gas, such as nitrogen) to flow through the vessel14. The vessel14may be either removably or permanently mounted within the apparatus10. The mixing element may comprise sieve16which is a foraminous plate having holes, pores or openings through which the powder can pass. Alternatively, the sieve16may be provided in the form of a rack having cross-wise dividing members, i.e., like a conventional oven rack. A plurality of sieves may be positioned within the interior of the vessel, according to certain embodiments. The size and shape of the openings in the sieve16may vary. The sieve16may be composed of a thermally conductive material such as stainless steel (such as grades 316L or 904) or other metal or metal alloy, for example. Alternatively, the sieve16may be replaced by a plurality of mixing elements having a compact shape (such as balls, cubes, cylinders or the like) and capable of independent motion within the vessel, composed of either metal or ceramic, for example. As another alternative, the sieve16may be omitted altogether. According to one aspect of the invention, the sieve16is coupled to the vessel14such that the sieve16rotates along with the vessel14. According to a different aspect of the invention, the sieve16is coupled to a fixed point (e.g., on the heating device12) such that the vessel14rotates relative to the stationary sieve16. 
In yet another aspect of the invention, the sieve16can move or rotate in a direction opposite to that of the vessel14. According to one aspect of the invention, the vessel14comprises means17for removing agglomerations or clumps of powder from the surface of the vessel. Means17may be a scraper, brush, wire brush, knife or paddle, for example. Means17may be coupled to a fixed point (e.g., on the heating device12) such that the vessel14rotates relative to the stationary means17. One means18for moving, for example for rotating, the vessel14is provided in the form of a roller, as shown. The roller engages the outer circumference of the vessel14for rotating the vessel14(see arrow showing one direction of rotation). The roller18may be configured to rotate the vessel14in both rotational directions. The roller18may be connected to a gear or output shaft of a motor (or other motive device) to cause rotation of the roller18. Alternatively, the vessel14may be directly connected to the motor, and the roller18may be a passive device that permits rotation of the vessel14. A passive roller20also engages the outer circumference of the vessel14at a location spaced from the roller18to permit rotation of the vessel14. A temperature sensor19may be positioned within the vessel14(or the heating device12) for either directly or indirectly sensing the temperature of the powder within the vessel14. The heating device12, the means for moving18, and temperature sensor19are directly or indirectly connected to a controller/processor20. The controller/processor20is configured to control operation of the heating device12and the means for moving18. The controller/processor20is configured to receive signals from the temperature sensor19. A user interface21, such as a display or keypad, is directly or indirectly connected to the controller/processor20for transmitting operating instructions to the controller/processor20. According to one exemplary method of operating the apparatus10for heat treating semi-crystalline or crystallizable polymer powder, the vessel14is first charged with a powder of a semi-crystalline or crystallizable polymer, such as a PAEK powder. The vessel14may then be closed by a cover (not shown). Preferably, the powder may occupy between about 10 and 70 percent of the volume of the vessel14. More preferably, the powder may occupy between about and 60 percent of the volume of the vessel14. More preferably, the powder may occupy between about 30 and 50 percent of the volume of the vessel14. Alternatively, the powder may occupy less than or equal to about 50% of the volume of the vessel14. The heating device12and the means for moving18are then activated by the controller/processor20. Once activated, the heating device12heats the powder within the vessel14to a predetermined temperature, depending upon the composition of the powder. This may be referred to as the heating step. The predetermined heating temperature may be a temperature value that is 20 degrees less, preferably 10 degrees less, more preferably 5 degrees less, than the melting temperature of the highest melting crystalline form of the polymer or a temperature value that is between the two melting points of the two crystalline phases of the polymer, as described in U.S. Pat. No. 9,587,107. For example, the predetermined heating temperature may be 250, or 260, or 270, or 275, or 280, or preferably 285 degrees Celsius.
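By way of a non-limiting illustration, the heating step described above can be sketched as a simple software control loop run by a controller such as the controller/processor20: the set point is chosen a few degrees below the melting temperature of the highest melting crystalline form, and the heating device is switched based on the sensed powder temperature. The thermal model, sensor reading, and heater interface in the following Python sketch are hypothetical placeholders.

    # Hypothetical sketch of the heating-step control described above: the set
    # point is chosen a few degrees below the melting temperature of the highest
    # melting crystalline form, and the heater is driven to hold that set point.
    # The simulated vessel thermal response is a crude first-order model.

    def heating_setpoint(melting_point_c, margin_c=5.0):
        # E.g. a 290 C melting point with a 5 C margin gives a 285 C treatment.
        return melting_point_c - margin_c

    def run_heating_step(melting_point_c, start_temp_c=20.0, steps=600):
        setpoint = heating_setpoint(melting_point_c)
        temp = start_temp_c
        for _ in range(steps):
            heater_on = temp < setpoint            # simple on/off control
            # Crude first-order thermal model of the powder in the vessel.
            ambient_drive = 320.0 if heater_on else 20.0
            temp += 0.02 * (ambient_drive - temp)
        return temp

    if __name__ == "__main__":
        final = run_heating_step(melting_point_c=290.0)
        print(f"Powder temperature after heating step: {final:.1f} C")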
The means for moving18rotates the vessel14about the axis A either (i) continuously in one rotational direction, or (ii) in a reciprocating or rocking fashion (e.g., less than one revolution in one rotational direction and less than one revolution in the opposite rotational direction, or periodically reversing at one or more than one revolution). As another example of reciprocating motion, the vessel14may be rotated in one rotational direction by two or three revolutions and then rotated in an opposite rotational direction by two or three revolutions. This may be referred to as the rotation or movement step. Rotation of the vessel14causes the powder in the vessel14to move and circulate, which promotes substantially even or uniform heating of the powder, faster heating of the powder, and prevents or limits hot spots in the powder. The sieve16(or mixing element) allows for separation of agglomerates formed and mixes the powder, therefore improving homogeneity and quality of the powder. The powder moves relative to the heated interior surfaces of the vessel14due to the motion of the vessel14and gravity. Accordingly, the grains of the powder move and do not stay fixed in place on the heated interior surfaces of the vessel14as the vessel14rotates about the axis A. As the powder passes through the sieve16, the sieve16either limits or substantially prevents the formation of agglomerates in the powder and/or separates agglomerates that may be formed. The heating and rotation steps are typically performed simultaneously for a first period of time during which the powder is heated from a starting (e.g., room) temperature to a predetermined temperature. The first period of time may be greater than 15 minutes and less than 10 hours, preferably less than 6 hours, more preferably 5 hours or less, more preferably 3 hours or less, and even more preferably, greater than 30 minutes and less than 2 hours. Once the predetermined temperature is reached, the heating and rotation steps may be continued for a second period of time while the target temperature is maintained at +/−5 degrees C. (i.e., within 5 degrees C., plus or minus, of the target temperature), or preferably at +/−3 degrees C. The second period of time may be at least 1 minute to 7 hours, preferably one minute up to about 6 hours, more preferably at least 1 hour to about 5 hours, even more preferably at least about 1 hour to about 4 hours. In another embodiment, the second period of time may be at least 120 minutes. Once the powder is sufficiently heated to the predetermined temperature for the predetermined time, the heat treated powder is optionally cooled down, and removed from the vessel14. The polymer powder obtained is ready for use, for example in selective laser sintering, as described for example in U.S. Pat. No. 9,587,107. In one preferred embodiment, the vessel and/or the product may be cooled, and the cooling time and rate controlled, for example, with forced air, water spray, or a jacket with circulating fluid such as oil, water, or air. Preferably, when the product is cooled, and during the cooling step, rotation is maintained. The cooling time, which is a controlled parameter, is generally as short as possible, e.g., less than 40 minutes, preferably less than 30 minutes, more preferably less than 20 minutes, even more preferably less than 10 minutes.
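The heat-up, hold, and cooling periods described above can likewise be captured as a simple schedule record with range checks, as in the following non-limiting Python sketch; the class and field names are illustrative, and only the numeric limits are taken from the preferred ranges stated in the text.

    # A minimal sketch of the heat-treatment schedule described above: a heat-up
    # period, a hold period at the target temperature within a +/-5 C band, and a
    # short cooling period during which rotation is maintained. The time limits
    # encode the preferred ranges from the text; the class name is illustrative.

    from dataclasses import dataclass

    @dataclass
    class TreatmentSchedule:
        heat_up_minutes: float      # first period: powder brought to the target
        hold_minutes: float         # second period: target held at +/-5 C
        cool_minutes: float         # cooling, as short as practical
        target_c: float = 285.0
        tolerance_c: float = 5.0

        def within_preferred_ranges(self):
            return (30 < self.heat_up_minutes < 120      # > 30 min, < 2 h preferred
                    and 60 <= self.hold_minutes <= 300   # ~1 h to ~5 h preferred
                    and self.cool_minutes < 40)          # preferably well under 40 min

        def temperature_ok(self, measured_c):
            return abs(measured_c - self.target_c) <= self.tolerance_c

    if __name__ == "__main__":
        schedule = TreatmentSchedule(heat_up_minutes=90, hold_minutes=180, cool_minutes=20)
        print("Schedule within preferred ranges:", schedule.within_preferred_ranges())
        print("287 C within hold tolerance:", schedule.temperature_ok(287.0))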
FIGS.2A and2Bdepict another apparatus110for heat treating semi-crystalline or crystallizable polymer powder113, which is shown schematically, according to a second exemplary embodiment of the invention. The apparatus110generally comprises a heating device112, a vessel114for containing the powder that is heated by the heating device112, a grid or sieve116positioned within the interior of the vessel114and at least partially immersed within the powder, a means118for moving (e.g., rotating) the vessel114about axis B, and means119for moving (e.g., rotating) the vessel114about axis C. The vessel114together with the heating device112and the means118and119may be conventional roto-molding equipment. A suitable roto-molding machine is distributed by Ferry Industries, Inc. Roto-molding equipment is described in U.S. Pat. No. 3,474,165 to ConocoPhillips, the disclosure of which is incorporated by reference herein in its entirety and for all purposes. The heating device112is a mechanism for producing heat and exposing the vessel114to the heat. The heating device112may be an oven in which the vessel114is positioned. Alternatively, the heating device112may be a heating element that is connected to or a part of the vessel114. The heating device112may be any conventional heating mechanism that is known to those skilled in the art and is not limited to that which is shown and described. The vessel114is a hollow box-shaped container in which the powder is contained. The vessel114has a removable lid114′. The vessel114is configured to rotate about axes B and C, as will be described in greater detail later. The vessel114is composed of thermally conductive stainless steel, such as grades 316L or 904, for example. The vessel114is a closed system to prevent pollution or the escapement of powder during operation. The sieve116may be a foraminous plate having holes, pores or openings through which the powder can pass. Alternatively, the sieve116may be provided in the form of a rack having cross-wise dividing members, i.e., like a conventional oven rack. The size, shape, and number of the openings in the sieve116may vary. The sieve116may be composed of thermally conductive stainless steel, such as grades 316L or 904, for example. The sieve116may extend between two opposing interior corners of the vessel114, and is substantially stationary with respect to the vessel114. The sieve116may be permitted to move with respect to the vessel114by a slight amount or may be fixed in place within the vessel114. Alternatively, the sieve116may be replaced by metallic balls or similar object as described above. As another alternative, the sieve116may be omitted altogether. The means118for moving the vessel114about axis B is (optionally) a motor having a shaft that is connected (either directly or indirectly) to the base of the vessel114, as shown. The means118may be configured to rotate the vessel114in both rotational directions about axis B. The means119for moving the vessel114about axis C is (optionally) a motor having a shaft that is connected (either directly or indirectly) to the side of the vessel114, as shown. The means119may be configured to rotate the vessel114in both rotational directions about axis C. The means119also rotates the means118as it rotates the vessel114(or vice versa) such that the means118and119can operate at the same time to rotate the vessel114about both axes B and C simultaneously. The means118and119may vary from that which is shown and described. As one alternative, one of the means118and119may be omitted. 
As another alternative, a third means for moving the vessel may be provided for rotating the vessel about a third axis that is normal to the axes B and C. As yet another alternative, the means118and/or119may shake (or vibrate) the vessel114in a reciprocating fashion along axes B and C, respectively, in lieu of rotation. Another means may shake (or vibrate) the vessel114in a reciprocating fashion along a third axis that is normal to the axes B and C. A temperature sensor123may be positioned within the vessel114(or the heating device112) for either directly or indirectly sensing the temperature of the powder113within the vessel114. The heating device112, the means118and119, and temperature sensor123are directly or indirectly connected to a controller/processor120. The controller/processor120is configured to control operation of the heating device112and the means118and119. The controller/processor120is configured to receive signals from the temperature sensor123. A user interface121, such as a display or keypad, is directly or indirectly connected to the controller/processor120for transmitting operating instructions to the controller/processor120. According to one exemplary method of operating the apparatus110for heat treating semi-crystalline or crystallizable polymer powder, the vessel114is first charged with polymer powder113. The powder may occupy less than the overall volume of the vessel114, as described above. The heating device112and the means118and119are then activated by the controller/processor120. Once activated, the heating device112heats the powder within the vessel114to a predetermined temperature, depending upon the composition of the powder, as explained above with respect to the apparatus10. The temperature is controlled so as to avoid or minimize melting, agglomeration or fusion of the powder. During the heating process, the means118rotates the vessel114about the axis B either (i) continuously in one rotational direction, or (ii) in a reciprocating or rocking fashion (e.g., less than one revolution in one rotational direction and less than one revolution in the opposite rotational direction). Similarly, the means119rotates the vessel114about the axis C either (i) continuously in one rotational direction, or (ii) in a reciprocating or rocking fashion (e.g., less than one revolution in one rotational direction and less than one revolution in the opposite rotational direction). The means118and119may or may not operate simultaneously. This may be referred to as the rotation or movement step. Rotation of the vessel114causes the grains of the powder in the vessel114to move and circulate, which promotes substantially even or uniform heating of the powder and prevents or substantially limits hot spots in the powder. The sieve116(or mixing element) enables separation of any agglomerates formed and mixes the powder, thereby improving homogeneity and quality of the powder. The powder moves relative to the heated surfaces of the vessel114. Accordingly, the grains of the powder move and do not stay fixed in place on the vessel114as the vessel114rotates about the axes B and C. As the powder passes through the sieve116, the sieve116limits or prevents the formation of agglomerates in the powder, or separates any agglomerates that may have formed. The heating and rotation steps are typically performed as described above with respect to the apparatus10.
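By way of a non-limiting illustration, the reciprocating or rocking rotation described above (less than one full revolution in each direction, optionally about both axes B and C) could be commanded as in the following sketch; the fractional turn values and the absence of any motor interface are purely illustrative.

    # Hypothetical sketch of the rocking (reciprocating) rotation described above:
    # the vessel is turned less than one full revolution in one direction and then
    # less than one revolution back, about each of two axes. The generator below
    # only produces the commanded fractional turns; motor interfaces are omitted.

    def rocking_turns(fraction_of_rev=0.75, cycles=4):
        # Yields signed fractional revolutions: +0.75, -0.75, +0.75, ...
        for i in range(cycles * 2):
            yield fraction_of_rev if i % 2 == 0 else -fraction_of_rev

    def biaxial_rocking(cycles=3):
        # Apply a rocking pattern about axis B and axis C simultaneously.
        for turn_b, turn_c in zip(rocking_turns(cycles=cycles),
                                  rocking_turns(fraction_of_rev=0.5, cycles=cycles)):
            print(f"axis B: {turn_b:+.2f} rev, axis C: {turn_c:+.2f} rev")

    if __name__ == "__main__":
        biaxial_rocking()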
The vessel and/or the product may be cooled, and the cooling time and rate controlled, for example, with forced air, water spray, or a jacket with circulating fluid such as oil, water, or air. Preferably, when the product is cooled, and during the cooling step, rotation is maintained. The cooling time, which is a controlled parameter, is generally as short as possible, e.g., less than 40 minutes, preferably less than 30 minutes, more preferably less than 20 minutes, even more preferably less than 10 minutes. During testing of the apparatus110, it was found that the powder reached a predetermined temperature (e.g. about 285 degrees Celsius) in about ninety minutes and the temperature of the powder was substantially homogeneous, owing to the continuous movement of the powder on the heated surface of the vessel114. Without intending to be bound to any theory, it is believed that rotating the vessel114continuously renews the strata of powder in contact with the heated interior surface of the vessel114so that a new strata of powder is continuously replacing an old strata of powder on the heated surface of the vessel114. Homogeneity of the powder temperature is important because the temperature range of the heat treatment preferably is limited, maintained, and controlled. More particularly, exceeding the maximum temperature of the temperature range of the heat treatment caused melting and agglomeration of the powder. In contrast, effective heat treatment in accordance with the teachings of this invention achieved attainment of a desired modified crystalline structure. FIG.3depicts a typical temperature profile for heat-treatment of polymer powder using roto-molding equipment. That figure shows that it takes approximately 100 minutes for the powder to reach the target temperature of about 275 degrees Celsius. In reducing the invention to practice, the apparatuses10and110were discovered to be commercially viable devices for heat-treating semi-crystalline or crystallizable polymer powder as compared to other known devices, which are described hereinafter in the Comparative Examples section. The types of apparatus and methods described herein are useful for use in connection with powders of semi-crystalline or crystallizable polymers, including polymorphic semi-crystalline or crystallizable polymers. This invention is not limited to the particular preferred embodiments described herein, and further includes any vessel with a means to cause the powder within the vessel to move with respect to the vessel. The polymers which can be used in connection with the present invention include polymorphic semicrystalline polymers and/or polymers capable of becoming semicrystalline upon being subjected to temperatures above the glass transition temperature of the polymer. As used herein, the term “polymorphic semicrystalline or crystallizable polymer” means that the polymer is capable of existing in one or more than one crystalline form and that the polymer has one or more regions that is crystalline and/or is capable of forming one or more regions of crystallinity upon heat treatment. According to various aspects of the invention, powders of polyaryletherketone (PAEK) polymers may be employed. For example, such a PAEK polymer powder may be a powder of a polymer selected from the group consisting of polyetheretherketone (PEEK), polyetherketoneketone (PEKK), polyetherketone (PEK), polyetheretherketoneketone (PEEKK), polyetherbiphenyletherketone (PEDEK) and polyetherketoneetherketoneketone (PEKEKK). 
Blends or mixtures or copolymers of polyaryletherketones such as PEEK-PEDEK as disclosed in WO 2015/124903 may also be employed within the scope of this invention. Other polymorphic polymers that could benefit from heat-treatment using an apparatus in accordance with the invention or a process in accordance with the invention include, but are not limited to: polyamide 11 (PA11), polyamide 12 (PA12) and polyvinylidene fluoride (PVDF) homopolymers and copolymers. An apparatus or process in accordance with the present invention could also be applicable to polymeric materials with a single crystal form such as PEEK (polyetheretherketone) and PEK (polyether ketone), where the treatment at elevated temperatures will increase the linear degree of crystallinity of the crystalline lamellae, affecting in a direct manner the melting temperature and/or the shape of the melting peak as observed by DSC (during the first heating, as described in ISO 11357) of the final product. The present invention is especially useful for polyetherketoneketones (PEKK). Polyetherketoneketones are well-known in the art and can be prepared using any suitable polymerization technique, including the methods described in the following patents, the disclosure of each of which is incorporated herein by reference in its entirety for all purposes: U.S. Pat. Nos. 3,065,205; 3,441,538; 3,442,857; 3,516,966; 4,704,448; 4,816,556; and 6,177,518. PEKK polymers differ from the general class of PAEK polymers in that they often include two different isomeric repeating units. These repeating units can be represented by the following Formulas I and II:
-A-C(═O)-B-C(═O)— I
-A-C(═O)-D-C(═O)— II
where A is a p,p′-Ph—O—Ph-group, Ph is a phenylene radical, B is p-phenylene, and D is m-phenylene. The Formula I:Formula II isomer ratio, commonly referred to as the T:I ratio, in the polyetherketoneketone is selected so as to vary the total crystallinity of the polymer. The T:I ratio is commonly varied from 50:50 to 100:0, and in some embodiments 60:40 to 80:20, or 55:45 to 90:10. A higher T:I ratio, such as 80:20, provides a higher degree of crystallinity as compared to a lower T:I ratio, such as 60:40. According to certain embodiments, the powder treated in accordance with the present invention is a PEKK powder having a T:I ratio of about 60:40, or about 70:30, or about 80:20, or about 50:50. Suitable polyetherketoneketones are available from several commercial sources under various brand names. For example, polyetherketoneketones are sold under the brand name KEPSTAN® polymers by Arkema. In addition to using polymers with a specific T:I ratio, mixtures of polyetherketoneketones may be employed. The powders used in the present invention may be produced directly by synthesis or by a variety of processes such as grinding, air milling, spray drying, freeze-drying, or direct melt processing to fine powders. Preferably, the powder is first produced and the heat treatment is performed. The heat treatment process and the powders produced by this process are not limited to any particular particle size. The particle size of the powder can be adjusted prior to or after the heat treatment process based on the needs of the specific application. In general, powders useful in the present invention may have a median volume average particle size/diameter of between 0.002 microns to 0.1 meter, and more preferably from 0.01 microns to 1.0 mm.
For use in selective laser sintering (SLS), a median volume average particle size/diameter of 15 to 150 microns may be preferred, and more preferably from 30 to 75 microns. "Median volume average particle size" and "median volume average particle diameter" are used interchangeably herein. In accordance with certain non-limiting aspects of the present invention, PEKK flakes are ground to produce PEKK powders having a median volume average particle diameter of between about 10 microns and about 150 microns, as measured on the dry powder using laser light scattering methods known in the art such as ISO 13320:2009. As used herein, "powder" may refer to a material composed of small particles of PEKK. The PEKK powders can have a median volume average particle size of about 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, or about 150 microns. In preferred aspects, the PEKK powders have a median volume average particle size of about 30 microns to about 100 microns. In other preferred aspects, the PEKK powders have a median volume average particle size of about 50 microns. In accordance with certain non-limiting aspects of the present invention, polymorphic semicrystalline or crystallizable polymers of a variety of structures may be heat-treated in a way that increases, decreases, or adjusts the melting point or the shape of the melting peak of the crystals to afford better powder handling and durability in applications that require powder flow at elevated temperatures. The polymorphic semicrystalline or crystallizable polymers may be heat treated in a way that induces crystallization and/or converts the crystalline polymer into a different, thermodynamically more stable crystal form. In one non-limiting embodiment, the polymer comprises a polyetherketoneketone (PEKK) capable of having at least two crystalline forms. In this embodiment, it is possible that the PEKK is initially amorphous, but upon being subjected to heat treatment, at least a portion of the PEKK converts to at least one crystalline form, which form is capable of being converted at least in part to a higher melting crystalline form. The heat treatment step is then capable of increasing the content of the higher melting crystalline form by subjecting the polymer composition to a temperature below the melting point of the highest melting crystalline form and within or above the melting range of the other crystalline form(s), for a time that increases the content of the highest melting crystalline form relative to the other crystalline form(s) in the polymer composition. In yet another embodiment, a process is provided for increasing the content of one crystal form of polyetherketoneketone that includes at least the step of heat treating a polymer composition comprising another crystal form of polyetherketoneketone at a temperature within or above the melting range of the lower melting crystal form of polyetherketoneketone and below the melting point of the higher melting crystal form of polyetherketoneketone. In this embodiment, again it is possible that the starting polyetherketoneketone is initially amorphous, but upon heat treatment, at least a portion of the polyetherketoneketone converts to one or both other crystal forms.
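As a non-limiting illustration of the temperature window described in the preceding embodiments, the following sketch checks that a candidate treatment temperature lies within or above the melting range of the lower melting crystal form while remaining below the melting point of the higher melting crystal form; the numeric example values are hypothetical and do not correspond to any particular grade.

    # A small illustrative helper for the temperature window described above: the
    # treatment temperature should lie within or above the melting range of the
    # lower melting crystal form while staying below the melting point of the
    # higher melting crystal form. The example values below are hypothetical.

    def treatment_window(lower_form_melt_onset_c, higher_form_melt_point_c):
        # Returns the (min, max) bounds of an acceptable treatment temperature.
        if lower_form_melt_onset_c >= higher_form_melt_point_c:
            raise ValueError("lower-melting form must melt below the higher-melting form")
        return lower_form_melt_onset_c, higher_form_melt_point_c

    def temperature_acceptable(candidate_c, lower_form_melt_onset_c, higher_form_melt_point_c):
        low, high = treatment_window(lower_form_melt_onset_c, higher_form_melt_point_c)
        return low <= candidate_c < high

    if __name__ == "__main__":
        # E.g. a lower-melting form starting to melt near 270 C and a higher-melting
        # form melting near 300 C would admit a 285 C treatment temperature.
        print(temperature_acceptable(285.0, 270.0, 300.0))   # True
        print(temperature_acceptable(305.0, 270.0, 300.0))   # False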
Powders which have been heat-treated using an apparatus according to the invention or a process according to the invention may be converted to useful articles or a coating on an article using any suitable or known method for converting polymer powders, including but not limited to selective laser sintering, roto-molding and powder coating. Invention Example 1: Rotatable Vessel Placed in an Oven Like that Shown in FIG.1 Material: Kepstan® 6002 PL PEKK powder (a product of Arkema, Inc.) having a median volume average particle size of about 50 microns. Filling: about 30% of the vessel in volume. Process: The vessel was filled with polymer powder at room temperature. The filled vessel was placed in the oven. The set point temperature of the oven was fixed at 293° C. A set point temperature of 293° C. led to a thermal treatment of the polymer powder at 285° C. The set temperature (293° C.) was reached after 2 hours and subsequently held for 3 hours. The vessel was moved by constant rotation at 10 rpm. The powder-filled vessel was then removed from the oven and then cooled with pulsed air. During this stage, rotation of the vessel was maintained. The treated powder can be sieved before future use. The median volume average particle size of the so-obtained powder was measured to be about 50+/−3 microns. Invention Example 2: Rotatable Vessel Placed in an Oven Like that Shown in FIG.1 Material: Kepstan® 6002 PL PEKK powder having a median volume average particle size of about 50 microns. Filling: about 10% of the vessel. Process: The vessel was filled with polymer powder at room temperature. The filled vessel was placed in the oven. The set point temperature of the oven was fixed at 293° C. A set point temperature of 293° C. led to a thermal treatment of the powder at 285° C. The set temperature (293° C.) was reached within 2 hours and subsequently held for 3 hours. The vessel was moved by constant rotation at 10 rpm. The powder-filled vessel was removed from the oven and then cooled with pulsed air. During this stage, rotation of the vessel was maintained. The treated powder can be sieved before future use. The median volume average particle size of the so-obtained powder was measured to be about 50+/−3 microns. Invention Example 3: Rotatable Vessel Placed in an Oven Like that Shown in FIG.1 Material: Kepstan® 6002 PL PEKK powder having a median volume average particle size of about 50 microns. Filling: about 50% of the vessel. Process: The vessel was filled with polymer powder at room temperature. The filled vessel was placed in the oven. The set point temperature of the oven was fixed at 293° C. A set point temperature of 293° C. led to a thermal treatment of the powder at 285° C. The set temperature (293° C.) was reached within 2 hours and subsequently held for 3 hours. The vessel was moved by constant rotation at 10 rpm. The powder-filled vessel was removed from the oven and then cooled with pulsed air. During this stage, rotation of the vessel was maintained. The treated powder can be sieved before future use. The median volume average particle size of the so-obtained powder was measured to be about 50+/−3 microns. Invention Example 4: Rotatable Vessel Placed in an Oven Like that Shown in FIG.1 Material: Kepstan® 6002 PL PEKK powder having a median volume average particle size of about 50 microns. Filling: about 55% of the vessel in volume. Process: The vessel was filled with polymer powder at room temperature. The filled vessel was placed in the oven.
The set point temperature of the oven was fixed at 293° C. A set point temperature of 293° C. led to a thermal treatment of the polymer powder at 285° C. The set temperature (293° C.) was reached after 2 hours and subsequently held for 5 hours. The vessel was moved by constant rotation at 10 rpm. The powder-filled vessel was then removed from the oven and then cooled with pulsed air. During this stage, rotation of the vessel was maintained. The treated powder can be sieved before future use. The median volume average particle size of the so-obtained powder was measured to be about 50+/−3 microns. Invention Example 5: Rotatable Vessel Placed in an Oven Like that Shown in FIG.1 Material: Kepstan® 6002 PL PEKK powder (a product of Arkema, Inc.) having a median volume average particle size of about 50 microns. Filling: about 30% of the vessel in volume. Process: The vessel was filled with polymer powder at room temperature. The filled vessel was placed in the oven. The set point temperature of the oven was fixed at 281° C. A set point temperature of 281° C. led to a thermal treatment of the polymer powder at 275° C. The set temperature (281° C.) was reached after 2 hours and subsequently held for 3 hours. The vessel was moved by constant rotation at 10 rpm. The powder-filled vessel was then removed from the oven and then cooled with pulsed air. During this stage, rotation of the vessel was maintained. The treated powder can be sieved before future use. The median volume average particle size of the so-obtained powder was measured to be about 50+/−3 microns. Invention Example 6: Rotatable Vessel Placed in an Oven Like that Shown in FIG.1 Material: Kepstan® 6002 PEKK powder (a product of Arkema, Inc.) having a median volume average particle size of about 70 microns. Filling: about 30% of the vessel in volume. Process: The vessel was filled with polymer powder at room temperature. The filled vessel was placed in the oven. The set point temperature of the oven was fixed at 293° C. A set point temperature of 293° C. led to a thermal treatment of the polymer powder at 285° C. The set temperature (293° C.) was reached after 2 hours and subsequently held for 3 hours. The vessel was moved by constant rotation at 10 rpm. The powder-filled vessel was then removed from the oven and then cooled with pulsed air. During this stage, rotation of the vessel was maintained. The treated powder can be sieved before future use. The median volume average particle size of the so-obtained powder was measured to be about 70+/−5 microns. Invention Example 7: Rotatable Vessel Placed in an Oven Like that Shown in FIG.2A Material: Kepstan® 6002 PL powder having a median volume average particle size of about 50 microns. Equipment: STP Lab 40 Rotomolder, using a parallelepiped chamber made of 4.76 mm 304 Stainless Steel having the following dimension: 431×431×675 mm3. Process: 20 kg of polymer powder was placed in the chamber, together with a stainless steel grid that is placed diagonally in the chamber, as shown inFIG.2B. The volume occupied by the powder was about 42% of the overall volume of the chamber. The oven temperature was set at 285° C. The chamber was placed in the oven, and continuously rotated along two axes as shown inFIG.2A. The temperature of the powder inside the mold was monitored with a rotolog and recorded. As shown inFIG.3, after about 2 hours, the powder reached the target temperature of 285+/−3° C., and then the powder was held3more hours at this temperature. 
It is noted thatFIG.3has been modified to smooth the curve depicting the powder temperature. The chamber was then removed from the oven and cooled using forced air. When discharged, the powder was a free-flowing powder with less than 15% of agglomerates that can easily be separated from the powder with sieving at 260 microns. The agglomerates were found to be easily breakable back to fine powder with mild pressure applied. No crust/agglomeration/melted product was found on the walls of the chamber. The median volume average particle size of the so-obtained powder was measured to be about 50+/−3 microns. Comparative Examples: KEPSTAN® 6002 PL polymer powder having a median volume average particle size of about 50 microns was used during testing in each of the comparative examples below. Comparative Example 1: Circulating Air Oven Following extensive testing, the efficiency and productivity of a circulating air oven was found to be low. Because of the poor heat transfer coefficient of the powder, only thin layers of powder were treated at a time which rendered the process too slow to be acceptable. In one example, a 5 cm thick layer of powder required about 7 hours before the inner temperature of the powder reached the temperature of the oven (e.g., 285 degrees Celsius). Comparative Example 2: Screw/Agitator/Stirring Device Some prior art references, such as U.S. Patent App. Pub. No. 20120364697, the disclosure of which is incorporated by reference herein in its entirety and for all purposes, mention the possibility of immersing a stirring device in the powder for circulating the powder. It is noted that the vessel in which the powder is contained remains stationary. In a comparison test, it was found that this method produced non-uniform heating of the powder and hot spots, which could lead to melted polymer, fouling on the walls of the equipment (thus slowing down the heat transfer process), agglomerated powder, and lumps in the heat treated powder. Comparative Example 3: Fluidized Bed In a fluidized bed heater, the powder is contained within a vessel and the outer walls of a vessel are heated. At the same time, a gas (such as air) is passed through the vessel at high enough velocities to suspend the solid in the gas stream. In a comparison test, it was found that the quantity of air required to generate powder fluidization did not allow the powder to reach the predetermined temperature for heat treatment before the powder exited the vessel. Comparative Example 4: Paddle Dryer Paddle dryers are mechanically agitated, indirect heat transfer devices that add or remove heat from a process mass. Paddle dryers can be used for indirect drying, heating, cooling, pasteurization, crystallizing, and reacting of powders and granules. During operation, the vessel and the paddles are heated, and the powder is distributed into the vessel. In a comparison test, it was found that powder did not reach the predetermined temperature (e.g., 260 degrees Celsius instead of the targeted temperature of 285 degrees Celsius) for heat treatment at least in part due to an insulating layer formed by agglomeration of powder positioned inside of the paddle dryer. Comparative Example 5: Vibrating Heat Treatment Unit In a vibrating heat treatment unit, a spiral tube is both vibrated and heated by electrical current. Powder is introduced at the bottom opening of the tube and, owing to the vibrations, the powder is transported inside of the heated spiral tube, becomes heated and exits through a top opening of the tube. 
In a comparison test, it was found that powder became lodged in the spiral and was not able to exit through the top opening. Aspects of the Present Invention Various illustrative aspects of the present invention may be summarized as follows:Aspect 1: A method for heat treating a powder of a polymorphic semi-crystalline or crystallizable polymer, the method comprising:heating the powder that is contained within an interior region of a vessel to a temperature that is less than a melting temperature of a highest melting crystalline form of the polymer; andmoving the vessel to cause the powder within the vessel to move with respect to the vessel.Aspect 2: The method of Aspect 1, wherein the powder comprises polyaryletherketone (PAEK), more preferably polyetherketoneketone (PEKK), and most preferably polyetherketoneketone (PEKK) having a T:I ratio of about 60:40.Aspect 3: The method of Aspect 2, wherein the powder comprises polyetherketoneketone (PEKK) and the heating step comprises heating the PEEK powder to a temperature of 230 degrees Celsius to 295 degrees Celsius or 260 degrees Celsius to 290 degrees Celsius.Aspect 4: The method of any of Aspects 1 to 4 further comprising laser sintering the heat-treated powder.Aspect 5: The method of any of Aspects 1 to 4, wherein the heating step comprises heating the powder to a temperature above a glass transition temperature (Tg) of the polymer.Aspect 6: The method of any of Aspects 1 to 5 further comprising either moving the powder through a sieve that is positioned within the vessel or moving mixing elements having a compact shape through the powder.Aspect 7: The method of any of Aspects 1 to 6 further comprising moving the powder through a sieve that is positioned within the vessel and rotating the sieve along with the vessel.Aspect 8: The method of any of Aspects 1 to 7 further comprising rotating the vessel in a single rotational direction about a first axis of the vessel.Aspect 9: The method of any of Aspects 1 to 8 further comprising rotating the vessel in two different rotational directions about the first axis of the vessel.Aspect 10: The method of any of Aspects 1 to 9 further comprising rotating the vessel in a first rotational direction about the first axis of the vessel, and before reaching one revolution of the vessel, rotating the vessel in a second rotational direction that is opposite to the first rotational direction.Aspect 11: The method of any of Aspects 1 to 10, further comprising rotating the vessel about a second axis of the vessel that is normal to the first axis.Aspect 12: The method of Aspect 11 further comprising simultaneously rotating the vessel about the first and second axes.Aspect 13: The method of any of Aspects 1 to 12, further comprising positioning the vessel within an oven.Aspect 14: The method of any of Aspects 1 to 13, wherein the vessel forms part of a roto-molding unit.Aspect 15: The method of any of Aspects 1 to 14 further comprising removing agglomerations from an interior surface of the vessel during movement of the vessel.Aspect 16: An apparatus for heat treating a powder of polymorphic semi-crystalline or crystallizable polymer, the apparatus comprising:a heating device for heating the powder to a temperature that is less than the melting temperature of a highest melting crystalline form of the polymer;a vessel that is exposed to heat produced by the heating device, the vessel defining an interior region for containing the powder; andmeans for moving the vessel to cause the powder within the vessel to move 
with respect to the vessel.Aspect 17: The apparatus of Aspect 16 further comprising a sieve positioned within the vessel for sieving the powder within the vessel upon movement of the vessel.Aspect 18: The apparatus of Aspect 17, wherein the sieve is a foraminous panel.Aspect 19: The apparatus of any of Aspects 16 to 18, wherein the means for moving is a motor shaft that is configured to rotate the vessel, the motor shaft being attached either directly or indirectly to the vessel.Aspect 20: The apparatus of any of Aspects 16 to 19, wherein the means for moving is a motorized roller that is positioned in contact with the vessel for rotating the vessel.Aspect 21: The apparatus of any of Aspects 16 to 20, wherein the vessel is a cylindrical tube.Aspect 22: The apparatus of any of Aspects 16 to 20, wherein the vessel is box shaped.Aspect 23: The apparatus of any of Aspects 16 to 22, wherein the heating device is an oven and the vessel is positioned within the oven.Aspect 24: The apparatus of any of Aspects 16 to 22, wherein the heating device is a heating element, and the heating element is connected to the vessel.Aspect 25: The apparatus of any of Aspects 16 to 24, wherein the apparatus is a roto-molding unit.Aspect 26: The apparatus of any of Aspects 16 to 25, further comprising means for removing agglomerations from an interior surface of the vessel. Within this specification, embodiments have been described in a way which enables a clear and concise specification to be written, but it is intended and will be appreciated that embodiments may be variously combined or separated without departing from the invention. For example, it will be appreciated that all preferred features described herein are applicable to all aspects of the invention described herein. Although the invention is illustrated and described herein with reference to specific embodiments, the invention is not intended to be limited to the details shown. Rather, various modifications may be made in the details within the scope and range of equivalents of the claims and without departing from the invention. | 43,828 |
11858174 | DETAILED DESCRIPTION OF THE INVENTION Multilayer plastics have at least a first layer and a second layer of different plastics, and may comprise additional layers of other plastics. The plastic layers are normally joined by an adhesive placed between two plastic layers. In this text the term ‘vacuum’ refers to both an absolute vacuum, in which the absolute pressure is zero, and a relative vacuum, in which the pressure is lower than a reference pressure. To specify when necessary which of the two vacuums is referred to in a phrase, the term “absolute” or “relative” is used explicitly. The multilayer plastics are shredded into multilayer plastic fragments by any mechanical means. In one embodiment the final size of the fragments, measured in their greater length, is not less than 10 mm. The superheated vapour causes heat shock that does not necessarily heat the entire particle to a specified temperature, and mechanical shock due to the sudden pressure change to which the plastic layers are subjected when passing from atmospheric pressure to working pressure, allowing a successful separation of the layers in the mechanical separation stage. The vapour exerts pressure on the fragments, weakening and breaking the chemical bonds between the layers. Since only vapour is heated, energy is saved and no chemical products are required. The temperature and pressure conditions inside the vessel and the discharge tank remain constant by means of the boiler, valves, pumps or other control means. This prevents energy losses and increases productivity, since the vessel and the tank are permanently ready to receive multilayer plastic fragments. The vessel can be connected to a discharge tank by a discharge valve or another control means. The pressurisation and vacuum cycles consist in the introduction of the multilayer plastic fragments in the vessel, as it contains superheated vapour at a specific temperature and pressure according to the types of plastics to treat, using a valve or other control means, keeping them inside for a predetermined time depending on the types of plastic to treat. Then, minimising the pressure and temperature losses using valves or other control means and making use of the pressure differences between the vessel and the discharge tank, the multilayer plastic fragments are transferred to the discharge tank, which is at a specific temperature and pressure conditions according to the types of plastics to treat and lower than the pressure and temperature conditions inside the vessel, keeping them inside for a predetermined time depending on the types of plastics to treat. These pressurisation and vacuum cycles can be repeated one or several times depending on the types of plastic and other properties of the multilayer plastic fragments, returning the multilayer plastic fragments through valves and other control means from the discharge tank to the vessel. As can be seen, the application of pressure and high temperature in the vessel is simultaneous by means of the action of the superheated vapour. Similarly, the application in the discharge tank of a vacuum and a low temperature is also simultaneous. This type of pressurisation and vacuum cycle increases the pressure difference withstood by the multilayer plastic fragments, compressing and decompressing the layers and generating tensile, compressive, and shear forces between them which further weaken the union between the layers. 
A further advantage of this pressurisation and vacuum cycle is that the layers dry during the vacuum stage, eliminating the subsequent drying stage. This temperature difference damages and breaks the unions between the plastic layers, and leads to structural and surface tensions between the unions due to the differences in the thermal expansion coefficients of the unions and the plastics. Moreover, the molecular structure of the plastics is modified, causing changes in volume of the unions and the plastic layers, which in turn further deteriorate the union. During the processing of the multilayer plastic fragments, specifically during their entry and outlet from the vessel, vapour may be lost. To correct this deviation from the predetermined conditions, a valve is provided that communicates the vessel and the boiler and is managed by a control system. The control system manages the opening of the valve to allow vapour inlet to the vessel in order to maintain the predetermined pressure and temperature conditions constant inside the vessel. During the processing of the multilayer plastic fragments, specifically during their entry and outlet from the discharge tank, superheated vapour from the vessel may enter said tank. To correct this deviation from the predetermined conditions, a valve and a pump are provided so that the valve connects the pump and the discharge tank. The pump maintains the predetermined pressure and temperature conditions inside it, extracting fluid from inside the discharge tank. To maximise the yield of the method, the fluid extracted by the pump from the discharge tank can be re-compressed and used in other parts of the process. The invention comprises a mechanical separation stage of the plastic layers of the fragments to obtain single-layer plastic fragments, which are therefore also single-component fragments. The unions have been weakened enough to allow a mechanical separation of the layers. This separation comprises the actions of cutting, brushing, polishing and rubbing the fragments. Finally, once the fragments have been separated into single-layer fragments, they are introduced into a unit for mechanical separation of the layers, where they are classified according to their composition making use of the different densities of the plastics, by placing the single-layer fragments in a controlled air stream. In one embodiment one of the layers of the multilayer plastics is polyethylene terephthalate (PET) and another of the layers is polyethylene (PE). However, the method is suitable for separating layers of any other plastic materials. With reference toFIGS.1and2, the vessel (1) comprises internal pressure and temperature sensors that provide data on these magnitudes to a control system. This control system controls these variables by acting on control valves, namely the vapour inlet control valve (10), discharge valve (2) of the vessel (1), and the discharge valve (20) of the hopper (9), as well as on the boiler (3), controlling its internal temperature. The composition of the superheated vapour is water. The boiler (3) produces steam by any heating method (such as electrical resistance, microwaves). The boiler (3) is controlled by the control system so that it produces superheated steam with the required characteristics for the types of plastic to treat. Initially, before introducing the first batch of fragments, the boiler (3) is turned on, producing superheated steam. 
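The corrective control described above, in which a valve between the boiler and the vessel admits superheated vapour when the vessel pressure drifts below its set point, and a pump draws fluid out of the discharge tank when its pressure drifts above the set relative vacuum, can be sketched as a simple deadband (hysteresis) controller. This is a minimal illustration only; the set points, the deadband and the function names are assumptions chosen to be consistent with the operating windows given in the following paragraph, and are not values taken from the patent.

```python
# Minimal sketch of the corrective pressure control described above.
# Set points, deadband and names are illustrative assumptions.

VESSEL_SET_PRESSURE_BAR = 6.0    # vessel set point, inside the 1-12 bar window described below
TANK_SET_PRESSURE_BAR = -0.5     # discharge-tank set point, relative vacuum (bar vs. ambient)
DEADBAND_BAR = 0.1               # hysteresis band to avoid rapid valve/pump cycling

def vessel_steam_valve_open(vessel_pressure_bar: float) -> bool:
    """True -> open the valve between the boiler and the vessel to admit superheated vapour."""
    return vessel_pressure_bar < VESSEL_SET_PRESSURE_BAR - DEADBAND_BAR

def tank_pump_on(tank_pressure_bar: float) -> bool:
    """True -> run the pump to extract fluid and restore the relative vacuum in the discharge tank."""
    return tank_pressure_bar > TANK_SET_PRESSURE_BAR + DEADBAND_BAR

# Example: vapour has leaked into the discharge tank and raised its pressure to -0.3 bar,
# so the pump runs; the vessel at 5.5 bar is below its set point, so the steam valve opens.
assert tank_pump_on(-0.3) is True
assert vessel_steam_valve_open(5.5) is True
```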
The valve (10) is opened to allow the superheated steam to pass to the vessel (1) until reaching predetermined pressure and temperature conditions (pressure: 1-12 bar; temperature: 100-191.12° C.). These conditions are kept constant throughout the time of application of the separation method for all the fragments to treat. The discharge tank (4) is seamlessly connected to a pump (5) via a valve (21). The pump (5) maintains constant relative vacuum conditions of −0.7 bar to 0.1 bar (with respect to ambient pressure) inside the discharge tank (4), which are maintained during the application of the separation method for the entire quantity of fragments to treat. The temperature in the discharge tank (4) will be between 15-25° C. When the pressure and temperature conditions in the vessel (1) and in the discharge tank (4) are as predetermined, after obtaining the multilayer plastic fragments with a size greater than 10 mm from the multilayer plastics, they are introduced into the vessel (1) through an input hopper (9) by opening the valve (20) located at the outlet of the input hopper (9). The fragments are then kept for a specified time in the vessel (1), where they are subjected to the high temperature and pressure of the superheated steam. If the layers are made from PET and PE, this time is preferably between 10 seconds and 60 seconds. After this time, the discharge valve (2) of the vessel (1) is opened and the fragments pass to the discharge tank (4), where they are kept for a predetermined time (between 1 and 5 minutes) and subjected to the relative vacuum and lower temperature inside it. If the layers are made from PET and PE, this time is preferably 5 minutes. After the predetermined time in these relative vacuum conditions, the fragments are taken to a mechanical separation unit (6). The steam that may have entered the discharge tank (4) is extracted through a recovery valve (11) that connects it to a mechanical recompression machine (8). The resulting condensed water is filtered and reintroduced into the boiler (3).
The mechanical separation unit (6), shown inFIG.3, comprises means for polishing, cutting, brushing and rubbing the multilayer plastic fragments against each other to obtain single-layer fragments made from a single type of plastic. These means consist of one or more drums (16) that rotate about an axis inside a chamber (17). Some drums (16) comprise a rough surface that peels and polishes the fragments. Other drums (16) comprise a surface covered with metal rods placed perpendicular to the surface of the drum; these rods perform the cutting, so the cut is made without requiring blades or any other sharp elements. The drums (16) are arranged in a straight line, one after the other, leaving a small space (19), preferably a distance of 0.8 mm, between the point of each drum closest to one of the surfaces of the chamber (17) and that surface of the chamber (17), to promote friction between the fragments and the surfaces of the drums (16). This space (19) can be adjusted by moving the drum towards or away from the surface of the chamber (17), or by moving the surface of the chamber (17) closest to the drums (16) towards or away from the drums (16). The row of drums (16) is arranged to form an angle close to 60° to the floor.
A barrier (18) is placed between every two drums (16); the drums (16), the barriers (18) and the wall of the chamber (17) closest to the drums together define a separation volume (13) which is open only at an input hopper (12) located at the upper end of the chamber (17) and an outlet hopper (14) located at the bottom end of the chamber (17), so that the fragments move only under gravity from the input hopper (12) through the volume (13), without passing to another volume of the chamber (17). The input hopper (12) comprises a wall (15) that directs the fragments towards the separation volume (13). The fragments are introduced into the separation unit (6) from the input hopper (12), falling as a result of gravity through the chamber (17), specifically through the separation volume (13), receiving the action of the drums (16). As they fall, the layers are separated from each other and the remaining moisture is eliminated by the stirring action. Finally, once separated into single-layer fragments, they leave through the outlet hopper (14). The outlet hopper (14) is connected to the mechanical classification unit (7). The mechanical classification unit (7) comprises a vibration table, an aspirator and a cyclone. The particles are transferred to the table. The action of the vibration table in combination with a controlled suction by the aspirator separates the fragments according to their density, suctioning the lighter phase and leaving the heavier phase on the table, thereby separating the fragments according to their composition. | 11,878
11858175 | DESCRIPTION OF EMBODIMENTS Specific embodiments are explained hereinafter in detail with reference to the drawings. However, the present disclosure is not limited to the below-shown embodiments. Further, the following descriptions and the drawings are simplified as appropriate for clarifying the explanation. First Embodiment <Configuration of Continuous Kneading Apparatus> Firstly, a configuration of a continuous kneading apparatus and an injection molding apparatus including the continuous kneading apparatus according to a first embodiment will be described with reference toFIGS.1to3. Each ofFIGS.1to3is a schematic cross-sectional view showing a configuration of the continuous kneading apparatus and the injection molding apparatus including the continuous kneading apparatus according to the first embodiment. Note that, needless to say, right-handed xyz-orthogonal coordinates shown inFIGS.1to3are shown for the sake of convenience for explaining the positional relation among components. In general, the z-axis positive direction is the vertically upward direction and the xy-plane is a horizontal plane throughout the drawings. As shown inFIGS.1to3, the continuous kneading apparatus10according to the first embodiment includes a cylinder11, a screw12, a hopper13, ring-shaped heaters14, temperature sensors60, and a control unit70. In addition to the continuous kneading apparatus10, the injection molding apparatus includes a fixed die21and a movable die22. FIG.1shows the injection molding apparatus in a state immediately before a molten resin82is injected into a cavity C formed by the dies (the fixed die21and movable die22). FIG.2shows the injection molding apparatus in a state after the injection of the molten resin82into the cavity C of the dies has been completed. FIG.3shows the injection molding apparatus in a state when a resin molded article83is removed from the dies. The cylinder11is a cylindrical member extending in the x-axis direction. The screw12is disposed so as to extend in the x-axis direction, and is rotatably housed inside the cylinder11. Although not shown in the drawings, for example, a motor is connected to the screw12as a rotational driving source with a speed reducer interposed therebetween. Further, the screw12can be moved in the x-axis direction by an actuator (not shown). As shown inFIG.2, as the screw12moves forward in the X-axis negative direction, the molten resin82is injected into the inside of the dies (the fixed die21and movable die22). The hopper13is a cylindrical member for charging resin pellets81, which are a raw material for the resin molded article83shown inFIG.3, into the inside of the cylinder11. The hopper13is disposed in the upper side of an end part of the cylinder11on the positive side in the X-axis direction. The ring-shaped heaters14are arranged along the longitudinal direction (the x-axis direction) of the cylinder11so as to cover the outer peripheral surface of the cylinder11. In the example shown inFIGS.1to3, four ring-shaped heaters14are provided on the distal-end side (the negative side in the x-axis direction) of the hopper13. Each of the plurality of ring-shaped heaters14is individually controlled by the control unit70. Each of the temperature sensors60measures a temperature of a part of the cylinder11heated by a respective one of the plurality of ring-shaped heaters14. Each of the temperature sensors60is, for example, a thermocouple. 
In the examples shown inFIGS.1to3, each of the temperature sensors60is inserted into a through hole formed in a respective one of the ring-shaped heaters14, and is positioned so as to be in contact with the cylinder11. The control unit70learns a control condition(s) for each of the ring-shaped heaters14while performing feedback control for a respective one of the ring-shaped heaters14based on a temperature measured by a respective one of the temperature sensors60. More specifically, the control unit70controls the output of each of the ring-shaped heaters14so that a temperature measured by a respective one of the temperature sensors60gets closer to a set temperature (a target temperature). Note that the configuration and the operation of the control unit70will be described later in a more detailed manner. In the continuous kneading apparatus10according to the first embodiment, resin pellets81supplied from the hopper13are kneaded by the rotating screw12inside the cylinder11while being heated by the ring-shaped heaters14. Since the resin pellets81are heated and extruded (i.e., pressed) from the base of the screw12toward the tip thereof (in the x-axis negative direction), they are compressed and transformed into a molten resin82. The fixed die21is a die fixed to the tip of the continuous kneading apparatus10. Meanwhile, the movable die22is a die that is driven by a driving source (not shown) and can slide in the x-axis direction. As the movable die22moves in the x-axis positive direction and abuts on the fixed die21, as shown inFIG.1, a cavity C whose shape conforms to the shape of a resin molded article83to be manufactured (seeFIG.3) is formed between the fixed die21and the movable die22. Next, as shown inFIG.2, the screw12moves forward in the x-axis negative direction and the molten resin82is charged into the cavity C, so that the resin molded article83(seeFIG.3) is molded. Then, as shown inFIG.3, the screw12retreats in the x-axis positive direction and the movable die22moves in the x-axis negative direction and thereby is released (i.e., separated) from the fixed die21, so that the resin molded article83is removed. <Configuration of Control Unit70According to Comparative Example> A continuous kneading apparatus according to a comparative example has an overall configuration similar to that of the continuous kneading apparatus according to the first embodiment shown inFIGS.1to3. In the comparative example, the control unit70performs, by using PID control, feedback control for each of the ring-shaped heaters14based on a temperature acquired from a respective one of the temperature sensors60. In the case of the PID control, it is necessary to adjust a parameter(s) every time a process condition(s) is changed. In general, an operator adjusts the parameter(s) through trial and error, thus causing a problem that a large amount of time is taken and a large amount of resin material is required to adjust the parameter(s). <Configuration of Control Unit70According to First Embodiment> Next, the configuration of the control unit70according to the first embodiment will be described in a more detailed manner with reference toFIG.4.FIG.4is a block diagram showing the configuration of the control unit70according to the first embodiment. As shown inFIG.4, the control unit70according to the first embodiment includes a state observation unit71, a control condition learning unit72, a storage unit73, and a control signal output unit74.
Note that each of the functional blocks constituting the control unit70can be implemented by hardware such as a CPU (Central Processing Unit), a memory, and other circuits, or can be implemented by software such as a program(s) loaded in a memory or the like. Therefore, each functional block can be implemented in various forms by computer hardware, software, or combinations thereof. The state observation unit71calculates a control error of each of the ring-shaped heaters14from a measured temperature value pv acquired from a respective one of the temperature sensors60. The control error is a difference between a target value and a measured value pv. Note that the target value is a target temperature set for each of the ring-shaped heaters14. Meanwhile, the measured value pv is a measured temperature value acquired from a temperature sensor60corresponding to the target ring-shaped heater14. Then, the state observation unit71determines, for each of the ring-shaped heaters14, a current state st and a reward rw for an action ac selected in the past (e.g., selected in the last time) based on the calculated control error. The state st is defined in advance in order to classify values of the control error, which can take any of infinite number of values, into a finite number of groups. As a simple example for an explanatory purpose, when the control error is represented by err, for example, a range “−4.0° C.≤err<−3.0° C.” is defined as a state st1; a range “−3.0° C.≤err<−2.0° C.” is defined as a state st2; a range “−2.0° C.≤err<−1.0° C.” is defined as a state st3; a range “−1.0° C.≤err<1.0° C.” is defined as a state st4; a range “1.0° C.≤err<2.0° C.” is defined as a state st5; a range “2.0° C.≤err<3.0° C.” is defined as a state st6; a range “3.0° C.≤err<4.0° C.” is defined as a state st7; and a range “4.0° C.≤err<5.0° C.” is defined as a state st8. In practice, in many cases, a larger number of states st each having a narrower range may be defined. The reward rw is an index for evaluating an action ac that was selected in a past state st. Specifically, when the absolute value of the calculated current control error is smaller than the absolute value of the past control error, the state observation unit71determines that the action ac selected in the past is appropriate and sets, for example, a positive value to the reward rw. In other words, the reward rw is determined so that the previously selected action ac is more likely to be selected again in the same state st as the past state. On the other hand, when the absolute value of the calculated current control error is larger than the absolute value of the past control error, the state observation unit71determines that the action ac selected in the past is inappropriate and sets, for example, a negative value to the reward rw. In other words, the reward rw is determined so that the previously selected action ac is less likely to be selected again in the same state st as the past state. Note that specific examples of the reward rw will be described later. Further, the value of the reward rw can be determined as appropriate. For example, the reward rw may have a positive value at all times, or the reward rw may have a negative value at all times. The control condition learning unit72performs reinforcement learning for each of the ring-shaped heaters14. 
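As a concrete illustration of the state observation just described, the sketch below discretises the control error into the example states st1 to st8 and assigns a reward depending on whether the previously selected action reduced the absolute error. It is a minimal sketch, not the patent's implementation: the sign convention for the error (chosen so that a positive error means the measured temperature is above the set temperature, consistent with the st7 example discussed further below), the clamping of out-of-range errors and the reward values of +1/−1 are assumptions.

```python
# Minimal sketch of the state observation unit 71 (assumptions noted in the text above).

STATE_BOUNDS = [            # control-error ranges, in deg C, for states st1..st8
    (-4.0, -3.0), (-3.0, -2.0), (-2.0, -1.0), (-1.0, 1.0),
    (1.0, 2.0), (2.0, 3.0), (3.0, 4.0), (4.0, 5.0),
]

def control_error(measured_temp: float, set_temp: float) -> float:
    """Positive error -> the cylinder is hotter than the set temperature (assumed sign)."""
    return measured_temp - set_temp

def classify_state(err: float) -> int:
    """Map a control error to the index of the state it falls into (0 -> st1, ..., 7 -> st8)."""
    for i, (lo, hi) in enumerate(STATE_BOUNDS):
        if lo <= err < hi:
            return i
    return 0 if err < STATE_BOUNDS[0][0] else len(STATE_BOUNDS) - 1  # clamp out-of-range errors

def reward(previous_err: float, current_err: float) -> float:
    """Positive reward if the previously selected action brought the error closer to zero."""
    return 1.0 if abs(current_err) < abs(previous_err) else -1.0
```

The reinforcement learning that the control condition learning unit 72 performs on top of these observations is described next.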
Specifically, the control condition learning unit72updates a control condition (a learning result) based on the reward rw, and selects an optimum action ac corresponding to the current state st under the updated control condition. The control condition is a combination of a state st and an action ac. Table 1 shows simple control conditions (learning results) corresponding to the above-described states st1 to st8. In the example shown inFIG.4, the control condition learning unit72stores the updated control condition cc in the storage unit73, which is, for example, a memory, and updates the control condition cc by reading it from the storage unit73.

TABLE 1
        st1            st2            st3            st4           st5          st6          st7          st8          Output change
        −4.0~−3.0° C.  −3.0~−2.0° C.  −2.0~−1.0° C.  −1.0~1.0° C.  1.0~2.0° C.  2.0~3.0° C.  3.0~4.0° C.  4.0~5.0° C.
ac1     −4.6           −4.2           −4.2           −3.2          −2.5         +0.3         +2.6         +5.2         −1.0%
ac2     −6.2           −5.2           +1.5           +2.5          +3.0         +3.5         +3.6         +3.2         −0.5%
ac3     −2.2           −1.5           +2.2           +5.2          +2.3         +2.0         +0.1         −2.3         0%
ac4     +4.2           +4.6           +4.4           +2.5          −0.2         −0.8         −2.2         −3.5         +0.5%
ac5     +5.5           +4.2           +3.5           +2.2          −3.0         −4.4         −4.6         −5.2         +1.0%

Table 1 shows control conditions (learning results) by Q learning, which is an example of reinforcement learning. The aforementioned eight states st1 to st8 are shown in the uppermost row in Table 1. That is, the eight states st1 to st8 are shown in the second to ninth columns, respectively. Meanwhile, five actions ac1 to ac5 are shown in the leftmost column in Table 1. That is, the five actions ac1 to ac5 are shown in the second to sixth rows, respectively. Note that, in the example shown in Table 1, an action for reducing the output (e.g., the voltage) to the ring-shaped heater14by 1.0% is defined as the action ac1 (Output Change: −1%). An action for reducing the output (e.g., the voltage) to the ring-shaped heater14by 0.5% is defined as the action ac2 (Output Change: −0.5%). An action for maintaining the output to the ring-shaped heater14is defined as the action ac3 (Output Change: 0%). An action for increasing the output to the ring-shaped heater14by 0.5% is defined as the action ac4 (Output Change: +0.5%). An action for increasing the output to the ring-shaped heater14by 1.0% is defined as the action ac5 (Output Change: +1.0%). The example shown in Table 1 is merely a simple example for an explanatory purpose. That is, in practice, in many cases, a larger number of more detailed actions ac may be defined. A value determined by a combination of a state st and an action ac in Table 1 is called a quality Q (st, ac). After an initial value is given, the quality Q is successively updated based on the reward rw by using a known updating formula. The initial value of the quality Q is included in, for example, the learning condition shown inFIG.4. The learning condition is input by, for example, an operator. The initial value of the quality Q may be stored in the storage unit73, and for example, a learning result in the past may be used as the initial value. Further, for example, the states st1 to st8 and the actions ac1 to ac5 shown in Table 1 are included in the learning condition shown inFIG.4. The quality Q will be described by using the state st7 in Table 1 as an example. In the state st7, since the control error is no lower than 3.0° C. and lower than 4.0° C., the heating temperature by the target ring-shaped heater14is too high. Therefore, it is necessary to reduce the output of the target ring-shaped heater14.
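To make the mechanics of Table 1 concrete, the sketch below represents the control condition as a table of qualities Q(st, ac) over the eight states and five actions, selects the action with the highest quality for the current state, and updates the quality from the reward. The patent only refers to a "known updating formula"; the one-step Q-learning update, the learning rate, the discount factor, the zero initial values and the epsilon-greedy exploration (the random action selection in the early learning stage mentioned further below) are assumptions of this sketch, as is treating the output change as a relative change.

```python
import random

# Output changes for ac1..ac5 (-1.0%, -0.5%, 0%, +0.5%, +1.0%), expressed as fractions.
ACTIONS = [-0.010, -0.005, 0.0, 0.005, 0.010]
NUM_STATES = 8                                   # st1..st8 from Table 1

# Control condition: quality Q(st, ac); zero initial values are an assumption.
Q = [[0.0] * len(ACTIONS) for _ in range(NUM_STATES)]

ALPHA = 0.1      # learning rate (assumption)
GAMMA = 0.9      # discount factor (assumption)
EPSILON = 0.1    # exploration rate for early random action selection (assumption)

def select_action(state: int) -> int:
    """Return the index of the optimum action for the current state (epsilon-greedy)."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    row = Q[state]
    return row.index(max(row))

def update_quality(state: int, action: int, rw: float, next_state: int) -> None:
    """One-step Q-learning update of the control condition based on the reward rw."""
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (rw + GAMMA * best_next - Q[state][action])

def apply_action(current_output: float, action: int) -> float:
    """Change the heater output according to the selected action."""
    return current_output * (1.0 + ACTIONS[action])
```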
As a result of the learning by the control condition learning unit72, the qualities Q of the actions ac1 and ac2 for reducing the output to the ring-shaped heater14are larger. Meanwhile, the qualities Q of the actions ac4 and ac5 for increasing the output to the ring-shaped heater14are smaller. In the example shown in Table 1, for example, when the control error is 3.5° C., the state st falls in the state st7. Therefore, the control condition learning unit72selects the optimum action ac2 having the highest quality Q in the state st7, and outputs the selected action ac2 to the control signal output unit74. The control signal output unit74reduces a control signal ctr output to the ring-shaped heater14by 0.5% based on the action ac2 received from the control condition learning unit72. The control signal ctr is, for example, a voltage signal. Then, when the absolute value of the next control error is smaller than the absolute value 3.5° C. of the current control error, the state observation unit71determines that the selecting of the action ac2 in the current state st7 is appropriate, and outputs a reward rw having a positive value. Therefore, the control condition learning unit72updates the control condition so as to increase the quality +3.6 of the action ac2 in the state st7 according to the reward rw. As a result, in the case of the state st7, the control condition learning unit72continuously selects the action ac2. On the other hand, when the absolute value of the next control error is larger than the absolute value 3.5° C. of the current control error, the state observation unit71determines that the selecting of the action ac2 in the current state st7 is inappropriate, and outputs a reward rw having a negative value. Therefore, the control condition learning unit72updates the control condition so as to reduce the quality +3.6 of the action ac2 in the state st7 according to the reward rw. As a result, in the case of the state st7, when the quality of the action ac2 in the state st7 becomes smaller than the quality +2.6 of the action ac1, the control condition learning unit72selects the action ac1 instead of the action ac2. Note that the timing of the updating of the control condition is not limited to the next time (e.g., not limited to when the control error is calculated the next time). That is, the timing of the updating may be determined as appropriate while taking a time lag or the like into consideration. Further, in the initial stage of the learning, the action ac may be randomly selected in order to expedite the learning. Further, although the reinforcement learning by simple Q learning is described above with reference to Table 1, there are various types of learning algorithms, such as Q learning, the AC (Actor-Critic) method, TD learning, and the Monte Carlo method, and the learning algorithm is not limited to any particular type. For example, when the number of states st and actions ac increases and the number of combinations thereof explosively increases, an algorithm suited to the situation, such as the AC method, may be selected. Further, in the AC method, a probability distribution function is used as a policy function in many cases. The probability distribution function is not limited to the normal distribution function. For example, for the purpose of simplification, a sigmoid function, a softmax function, or the like may be used. The sigmoid function is a function that is used most commonly in neural networks.
Because reinforcement learning is a type of machine learning, just like a neural network, it can use the sigmoid function. Further, the sigmoid function has the additional advantage that the function itself is simple and easy to handle. As described above, there are various learning algorithms and functions to be used, and an optimum algorithm and an optimum function may be selected as appropriate for the process. As described above, the PID control is not used in the continuous kneading apparatus according to the first embodiment. Therefore, to begin with, there is no need to adjust a parameter(s) which would otherwise be necessary when a process condition is changed. Further, the control unit70updates the control condition (the learning result) based on the reward rw through the reinforcement learning, and selects an optimum action ac corresponding to the current state st under the updated control condition. Therefore, even when a process condition(s) is changed, it is possible to reduce the time taken for the adjustment and the amount of a resin material required therefor as compared to those in the comparative example. Note that the application of the continuous kneading apparatus10according to the first embodiment is not limited to injection molding apparatuses. That is, the continuous kneading apparatus10may also be used in extrusion molding apparatuses. In the case of an extrusion molding apparatus, since the injecting operation in the continuous kneading apparatus10is unnecessary, the screw12does not have to be movable in the x-axis direction. The rest of the configuration of the continuous kneading apparatus10is roughly the same in the injection molding apparatus and in the extrusion molding apparatus. <Control Method for Continuous Kneading Apparatus> Next, a method for controlling the continuous kneading apparatus according to the first embodiment will be described in detail with reference toFIG.5.FIG.5is a flowchart showing a method for controlling the continuous kneading apparatus according to the first embodiment. The following description will be given while referring toFIG.4as appropriate as well as referring toFIG.5. Firstly, as shown inFIG.5, the state observation unit71of the control unit70shown inFIG.4calculates, for each ring-shaped heater14, a control error from a temperature measured by a respective one of the temperature sensors60. Then, based on the calculated control error, the state observation unit71determines a current state st and a reward rw for an action ac selected in the past (Step S1). Note that, at the start of the control, there is no action ac selected in the past (e.g., no action ac selected in the last control cycle), and hence it is impossible to determine the reward rw. Therefore, only the current state st at the start of the control is determined. Next, as shown inFIG.5, the control condition learning unit72of the control unit70updates a control condition, which is a combination of a state st and an action ac, based on the reward rw. Then, the control condition learning unit72selects an optimum action ac corresponding to the current state st under the updated control condition (Step S2). Note that, at the start of the control, the control condition is not updated and remains as the initial value, but the optimum action ac corresponding to the state st at the start of the control is selected.
Then, as shown inFIG.5, the control signal output unit74of the control unit70outputs a control signal ctr to the ring-shaped heater14based on the optimum action ac selected by the control condition learning unit72(Step S3). When the manufacturing of the resin molded article83has not been completed yet (Step S4: NO), the process returns to step S1and the control is continued. On the other hand, when the manufacturing of the resin molded article83has been completed (Step S4: YES), the control is finished. That is, the steps S1to S3are repeated until the manufacturing of the resin molded article83is completed. As described above, the PID control is not used in the continuous kneading apparatus10according to the first embodiment. Therefore, to begin with, there is no need to adjust a parameter(s) which would otherwise be necessary when a process condition(s) is changed. Further, the control condition (the learning result) is updated based on the reward rw through the reinforcement learning using a computer, and an optimum action ac corresponding to the current state st is selected under the updated control condition. Therefore, even when a process condition(s) is changed, it is possible to reduce the time taken for the adjustment and the amount of a resin material required therefor as compared to those in the comparative example. Second Embodiment Next, a continuous kneading apparatus according to a second embodiment will be described with reference toFIG.6. The overall configuration of the continuous kneading apparatus according to the second embodiment is similar to that of the continuous kneading apparatus according to the first embodiment shown inFIGS.1to3, and therefore the description thereof will be omitted. The configuration of the control unit70in the continuous kneading apparatus according to the second embodiment differs from that in the continuous kneading apparatus according to the first embodiment. FIG.6is a block diagram showing the configuration of the control unit70according to the second embodiment. As shown inFIG.6, the control unit70according to the second embodiment includes a state observation unit71, a control condition learning unit72, a storage unit73, and a PID controller74a. That is, the control unit70according to the second embodiment includes the PID controller74ain place of the control signal output unit74provided in the control unit70according to the first embodiment shown inFIG.4. The PID controller74ais also an example of the control signal output unit. Similarly to the first embodiment, the state observation unit71determines, for each ring-shaped heater14, a current state st and a reward rw for an action ac selected in the past based on the calculated control error err. Then, the state observation unit71outputs the current state st and the reward rw to the control condition learning unit72. Further, the state observation unit71according to the second embodiment outputs the calculated control error err to the PID controller74a. Similarly to the first embodiment, the control condition learning unit72also performs reinforcement learning for each ring-shaped heater14. Specifically, the control condition learning unit72updates a control condition (a learning result) based on the reward rw, and selects an optimum action ac corresponding to the current state st under the updated control condition.
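A minimal sketch of this second arrangement is given below: the reinforcement-learning action no longer sets the heater output directly but, as elaborated in the next paragraph, changes a parameter of the PID controller, which in turn computes the control signal from the control error. Which parameter is adjusted, the relative step sizes and the positional PID form are assumptions of the sketch, not details given in the patent.

```python
class PIDController:
    """Textbook positional PID; err is the set temperature minus the measured temperature."""

    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self._integral = 0.0
        self._previous_err = 0.0

    def control_signal(self, err: float, dt: float) -> float:
        """Return the output (e.g., a voltage command) for the current control error."""
        self._integral += err * dt
        derivative = (err - self._previous_err) / dt
        self._previous_err = err
        return self.kp * err + self.ki * self._integral + self.kd * derivative

# Hypothetical actions: relative changes applied to the proportional gain only.
PARAMETER_ACTIONS = [-0.05, 0.0, +0.05]

def apply_parameter_action(controller: PIDController, action: int) -> None:
    """The learning unit's action adjusts a controller parameter, not the heater output."""
    controller.kp *= 1.0 + PARAMETER_ACTIONS[action]

# Example: the learner decides the proportional gain should be reduced by 5 percent.
pid = PIDController(kp=2.0, ki=0.1, kd=0.05)
apply_parameter_action(pid, 0)
output = pid.control_signal(err=1.5, dt=1.0)
```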
Note that, in the first embodiment, the output to the ring-shaped heater14is directly changed according to the content (i.e., the details) of the action ac selected by the control condition learning unit72. In contrast, in the second embodiment, a parameter(s) of the PID controller74ais changed according to the content (e.g., the details) of the action ac selected by the control condition learning unit72. As shown inFIG.6, the parameter of the PID controller74ais successively changed based on the action ac output from the control condition learning unit72. Meanwhile, the PID controller74aoutputs a control signal ctr to the ring-shaped heater14based on the control error err received from the state observation unit71. The control signal ctr is, for example, a voltage signal. The rest of the configuration is similar to that of the first embodiment, and therefore the description thereof will be omitted. As described above, in the continuous kneading apparatus according to the second embodiment, PID control is used, so that it is necessary to adjust a parameter(s) when a process condition(s) is changed. In the continuous kneading apparatus according to the second embodiment, the control unit70updates the control condition (the learning result) based on the reward rw through the reinforcement learning, and selects an optimum action ac corresponding to the current state st under the updated control condition. Note that the action ac in the reinforcement learning is to change a parameter of the PID controller74a. Therefore, even when a process condition(s) is changed, it is possible to reduce the time taken for the adjustment of the parameter and the amount of a resin material required therefor as compared to those in the comparative example. In the above-described examples, the program includes instructions (or software codes) that, when loaded into a computer, cause the computer to perform one or more of the functions described in the embodiments. The program may be stored in a non-transitory computer readable medium or a tangible storage medium. By way of example, and not a limitation, non-transitory computer readable media or tangible storage media can include a random-access memory (RAM), a read-only memory (ROM), a flash memory, a solid-state drive (SSD) or other types of memory technologies, a CD-ROM, a digital versatile disc (DVD), a Blu-ray disc or other types of optical disc storage, and magnetic cassettes, magnetic tape, magnetic disk storage or other types of magnetic storage devices. The program may be transmitted on a transitory computer readable medium or a communication medium. By way of example, and not a limitation, transitory computer readable media or communication media can include electrical, optical, acoustical, or other forms of propagated signals. From the disclosure thus described, it will be obvious that the embodiments of the disclosure may be varied in many ways. Such variations are not to be regarded as a departure from the spirit and scope of the disclosure, and all such modifications as would be obvious to one skilled in the art are intended for inclusion within the scope of the following claims. | 27,209 |
11858176 | DETAILED DESCRIPTION FIG.1shows a pelletizing apparatus2, which is configured here and preferably as an underwater pelletizing apparatus; the embodiments according to the invention can also be used in other pelletizing apparatus or methods, however. Pelletizing apparatus2has a driver6that provides driving power to an underwater pelletizer14. Pelletizing apparatus2also has a protective cover16. Liquid plastic melt is typically fed to die assembly4by means of an extruder (not shown in the Figures). Die assembly4has a pressure regulating device26and a die unit28. The melt is fed to pressure regulating device26and regulated in respect of melt pressure, in particular, depending on the melt material, its viscosity and intended throughput, and fed to die unit28. Die unit4is heated electrically or by means of a heating fluid. Process water can also be introduced into die assembly4by means of a process water inlet24and can leave it via process water outlet12. During operation, the melt exits in the form of melt strands (not shown inFIG.1) from die assembly4or die unit28in the direction of the underwater pelletizer14and is first divided into strand sections by means of a cutting device (not shown); the cutting device is preferably designed with rotating cutting blades. These melt strand sections come into contact with a coolant, in particular water, in the underwater pelletizer14and are cooled abruptly. The melt strands are cut and form granules that are separated from the water as pellets later in the process. Driver6is used to drive the cutting device which is provided for separating the melt strands into strand sections. The assembly comprising driver6, underwater pelletizer14and die assembly4with die unit28and pressure regulating device26is mounted on a machine baseplate20. The latter, for its part, is coupled by means of spacer elements22to a baseplate18, which for its part is connected to a housing8. Housing8, for its part, is mounted on a skid mount10, which has rollers, for example, for making it easier to position pelletizing apparatus2. FIG.2shows die assembly4as shown inFIG.1, but separated from pelletizing apparatus2. Die assembly4includes pressure regulating device26and die unit28. Die unit28contains a die member38and a die plate40. Die plate40has die orifices42from which melt strands exit die unit28. Pressure regulating device26is coupled to die unit28. Pressure regulating device26has a base member30and a housing section31. Melt enters base member30at fluid inlet side32.FIG.2also shows actuators34which allow the free cross-section of flow in a section of pressure regulating device26to be influenced by means of actuating nuts36, and which thus allow the melt pressure to be influenced indirectly. InFIG.3, pressure regulating device26is now shown without die unit28, and the fluid outlet side48of pressure regulating device26can now be seen. A flow channel46is formed inside base member30of pressure regulating device26. In the present embodiment, the flow channel is defined in the region of fluid outlet side48by a sleeve44which can be moved translationally. By moving the sleeve44, it is possible, in combination with die unit28not shown here, to influence the free cross-section of flow in the region of fluid outlet side48, as shown in detail in the following Figures. FIGS.4to5show sectional views of the die assembly shown inFIG.2. As already mentioned, die assembly4comprises pressure regulating device26and die unit28. 
Here, die unit28consists of a die member38in which die member flow channels are introduced. A guide cone58is attached to die member38. The guide cone is centered by centering pin54, in particular, and coupled to die member38by means of a cone fastening screw56. A die plate40, which has die orifices42(seeFIG.2) from which melt strands exit from the apparatus, is mounted on the outlet side of die member38. Pressure regulating device26is coupled to die unit28. The pressure regulating device has a base member30in which a flow channel46is formed. Here, flow channel46is centered relative to the longitudinal axis in the middle of base member30. An annular channel50is formed in the outlet region of pressure regulating device26by the interaction of flow channel46with the guide cone58of die unit28. In order to influence the free cross-section of flow in this annular channel section50, a sleeve44with a regulating section52is arranged in the region thereof. Sleeve44is mounted translationally movably along the longitudinal axis of base member30. If sleeve44with regulating section52is moved in the direction of die member38, the free cross-section of flow in annular channel section50is narrowed. If, however, sleeve44is moved in the opposite direction away from die member38, the free cross-section of flow is increased, although the free cross-section of flow cannot become greater overall than the region of annular channel section50defined by the interaction of guide cone58and base member30. A housing section31is arranged on the fluid outlet side48of pressure regulating device26and extends substantially annularly around base member30and sleeve44. Housing section31is additionally connected to base member30by means of bolt62. Bolt62is screwed in sections into base member30at the end facing away from the bolt head and is fastened to base member30by means of fastening nut66. The preferred plurality of bolts62thus provides an additional connection between base member30and housing section31. Bolts62are received in housing section31and are fastened to the housing section by means of fastening nuts64. Sleeve44has bores with a diameter that matches the diameter of bolt62. Sleeve44also has a recess for insertion of an actuating nut36. Sleeve44is slid onto bolts62, and nut36is screwed onto bolt62. Due to the shape of the section for receiving actuating nuts36, actuation of the actuating nuts36causes sleeve44, which is in contact with actuating nut36, to move translationally when actuating nut36is rotated, if housing section31is fixed in position relative to base member30. The position of sleeve44can thus be adjusted translationally by rotating the actuating nut36associated with an actuator34. The free cross-section of flow in annular channel section50can thus be influenced by interaction with regulating section52of sleeve44. The melt pressure is regulated indirectly by this adjustment of the free cross-section of flow in annular channel section50. The range of movement of sleeve44is limited by a first abutment shoulder70and a second abutment shoulder72. FIG.5shows an operating state of die assembly4, in which sleeve44has been moved translationally in the direction of die member38. The free cross-section of flow in annular channel section50is now restricted at the direct transition to die member38of die unit28.
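How the axial position of the regulating section sets the remaining free cross-section of flow can be illustrated with a strongly idealised geometric sketch: a straight conical guide surface inside a cylindrical bore, with the sleeve modelled as a smaller coaxial bore covering part of the channel. All radii, lengths and names below are hypothetical values chosen only to show the trend; they are not taken from the figures.

```python
import math

def annular_area(bore_radius: float, cone_radius: float) -> float:
    """Free cross-section of the annulus between a cylindrical bore and the guide cone."""
    return math.pi * (bore_radius ** 2 - cone_radius ** 2)

def minimum_free_area(sleeve_tip_position: float,
                      channel_length: float = 0.05,       # m, axial extent of the annular section
                      cone_radius_inlet: float = 0.010,   # m, cone radius at the inlet end
                      cone_radius_outlet: float = 0.020,  # m, cone radius at the die member
                      bore_radius: float = 0.030,         # m, bore of the base member
                      sleeve_bore_radius: float = 0.024,  # m, bore of the regulating section
                      steps: int = 200) -> float:
    """Smallest annular area along the channel when the sleeve covers [0, sleeve_tip_position].

    Moving the sleeve towards the die member (a larger tip position) places the narrower
    sleeve bore over the thick end of the cone and therefore reduces the minimum free area.
    """
    smallest = float("inf")
    for i in range(steps + 1):
        x = channel_length * i / steps
        cone_r = cone_radius_inlet + (cone_radius_outlet - cone_radius_inlet) * x / channel_length
        local_bore = sleeve_bore_radius if x <= sleeve_tip_position else bore_radius
        smallest = min(smallest, annular_area(local_bore, cone_r))
    return smallest

retracted = minimum_free_area(sleeve_tip_position=0.0)   # about 1.5e-3 m^2, sleeve withdrawn
advanced = minimum_free_area(sleeve_tip_position=0.05)   # about 5.5e-4 m^2, sleeve at the die member
```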
Such a restriction of the free cross-section of flow in annular channel section50can be used, for example, to increase the pressure of the melt compared to the state shown inFIG.4. The structure of die assembly4as shown inFIGS.6to7is essentially based on the structure known fromFIGS.4and5. However, sleeve44or regulating section52of sleeve44has a number of pins74. In addition to the positioning of regulating section52, the positioning of pins74also offers a way of influencing the free cross-section of flow and thus indirectly the pressure conditions in annular channel section50, specifically, and also in die member flow channels60. Pins74may be screwed or glued to sleeve44, for example, or inserted into the sleeve by a press fit. Alternatively, pins74and sleeve44may be integrally formed. The total number of pins74is variable and may also be adapted to a material to be processed, to a respective viscosity or to a desired material throughput. FIG.6shows the state in which sleeve44, including pins74, is in a position moved away from die member38. In this operating position, regulating section52does not restrict the free cross-section of flow in annular channel section50, but pins74are already inserted at least partially into annular channel section50and into die member flow channels60. In the state shown inFIG.7, sleeve44has now been moved translationally in the direction of die member38. This results in regulating section52restricting the free cross-section of flow in the annular channel section50, while at the same time pins74restrict the free cross-section of flow in the region of annular channel section50and additionally in the region of die member flow channels60. An alternative embodiment of a die assembly104is shown inFIG.8. Die assembly104includes an alternative embodiment of a pressure regulating device126as well as die unit28that is already known. Die unit28has a die member38and a die plate40with die orifices42. Pressure regulating device126is connected to die unit28and has a base member130to which a coupler188with a hand lever182is attached. Moving hand lever182along the circumference of base member130results in a change in the free cross-section of flow in annular channel section150or in die member flow channels60, as can be seen in detail from the following Figures. FIG.9, for example, shows pressure regulating device126, which is mounted on die unit28. Pressure regulating device126has a base member130in which a flow channel146is arranged. In combination with guide cone58of die unit28, flow channel146forms an annular channel section150. A regulating ring186is arranged in said annular channel section150. The free flow cross section in annular channel section150can be influenced, specifically, by a translational movement of said regulating ring186in the direction of die unit28(or in the opposite direction). A movement of regulating ring186in the direction of die unit28results in a reduction of the free cross-section of flow in annular channel section150. This allows indirect influence to be exerted on the pressure conditions of a melt in this region. Regulating ring186is arranged on a retaining ring184. The respective components may be glued together, for example, or screwed together or connected in some other way, and if necessary may also be integrally formed. An actuating element176is attached form-fittingly or force-fittingly to retaining ring184. A plurality of actuating elements176are typically attached to retaining ring184, although only one is shown here due to the sectional view.
Actuating element176is connected, in turn, to a plunger178, which has a threaded portion at its end opposite retaining ring184, onto which threaded portion an actuating element180is placed. The range of movement of actuating element180is limited on one side by base member130and on the other side by a cap ring190. Translational movement of actuating element180is thus inhibited, with the consequence that rotation of actuating element180causes plunger178to move translationally in the direction of die member38or away from it. As regulating ring186is connected indirectly to plunger178, any rotation of actuating element180will cause a translational movement of regulating ring186, with which the free cross-section of flow in annular channel section150can then be regulated. As already mentioned, pressure regulating device126preferably has a plurality of plungers178, in particular three. In order to facilitate a uniform translational movement of the plurality of plungers178, actuating elements180are preferably provided in the form of gear wheels that match a coupler188configured as an internal gear, in particular. Rotation of coupler188along the circumference of base member130results in uniform movement of the plurality of actuating elements180, thus ensuring that regulating ring186is moved uniformly and as purely translationally as possible in the direction of die member38or away from it. A hand lever182is provided on coupler188to facilitate manual operation of coupler188. FIGS.10and11show different operating states of the die assembly104shown inFIG.9.FIG.10shows an operating state in which regulating ring186has been moved as far as possible in a direction away from die member38. The free cross section of flow in annular channel section150is thus maximized. In contrast,FIG.11shows a state in which regulating ring186has been moved as far as possible toward die member38. In this operating state, the free flow cross section in annular channel section150is minimized. However, a certain free cross-section of flow always remains between regulating ring186and annular channel section150. FIG.12shows the die assembly ofFIGS.8to11, but with pins174arranged on regulating ring186to influence the free cross-section in annular channel section150or in die member flow channels60. Pins174may be connected to retaining ring184or regulating ring186in different ways. The components can be screwed, glued, otherwise connected, or integrally formed, for example. The number of pins174is also variable, as are their geometry and length. Referring now toFIG.12, any actuation of coupler188will now cause actuating element180to likewise rotate. As actuating element180is held in position by base member130and cap ring190, rotation of actuating element180will result in plunger178being moved translationally, either in the direction of die member38or away from it, depending on the direction of rotation. As a plurality of pins174are arranged on retaining ring184or regulating ring186, these are moved in the direction of annular channel section150and in the direction of die member flow channels60, or away from them. Pins174specifically allow a further reduction of the free cross-section of flow in annular channel section150and in particular in die member flow channels60, so that the melt pressure in a region in the immediate vicinity of die plate40can be influenced in a targeted manner. The aforementioned operating states are illustrated inFIGS.13and14. 
InFIG.13, regulating ring186together with pin174has been moved away from die member38, whereas inFIG.14the aforementioned components have been moved by the maximum amount in the direction of die member38. As can be seen fromFIG.14, in particular, pins174cause die member flow channels60to be filled almost completely by pins174, thus minimizing the remaining free cross-section of flow in die member flow channels60. Another embodiment of a die assembly204is shown inFIG.15. Die assembly204has a pressure regulating device226and a die unit28. Die unit28has a die member38and a die plate40with die orifices42. Pressure regulating device226, which has a retaining ring292, a connecting ring294and pins274, is mounted on said die unit28. The pressure regulating device also has an actuating nut236. A plurality of actuating nuts236, in particular three actuating nuts236, are preferably provided on pressure regulating device226. The structure of pressure regulating device226can be seen fromFIGS.16to20. Pressure regulating device226has a base member230in which a flow channel246is formed. In a region between pressure regulating device226and die unit28, an annular channel section250is formed in conjunction with guide cone58of die unit28. A plurality of pins274, which can be moved translationally in the direction of die unit28or away from it, project into annular channel section250. Pins274are guided section-wise in base member230and are mounted with their head in a mounting ring292. A translational movement of mounting ring292thus results in a translational movement of pins274as well. Mounting ring292is connected to base member230and to the connecting ring294by one, preferably several, actuating nuts236. Here, rotation of actuating nut236causes mounting ring292to move translationally in the direction of die unit28or away from it, depending on the direction of rotation. As pins274are accommodated in mounting ring292, they are moved analogously in a translational manner. By actuating or rotating actuating nuts236, it is thus possible to move pins274translationally into annular channel section250or into die member flow channels60and to move them back out of them. The different operating states of die assembly204can be seen fromFIGS.17to18. In the state shown inFIG.17, pins274have been moved the maximum distance away from die member38. This means that pins274extend only into the region of annular channel section250and slightly into die member flow channels60. A larger free flow cross section remains in the region of annular channel section250and die member flow channels60. In the state shown inFIG.18, pins274have been moved translationally in the direction of die member38by the maximum amount. As can be seen fromFIG.18, the remaining free cross-section of flow, especially in die member flow channels60, is now minimized. FIGS.19and20show alternative embodiments with regard to the configuration of pins296, which are now longer compared to those in the previous embodiments. Depending on the operating state, pins296accordingly extend further into die member flow channels60, in particular, as a result of which the free cross-section of flow and thus indirectly the melt pressure in the immediate vicinity of die plate40can be influenced. FIG.21shows a final embodiment of a die assembly304. Die assembly304comprises the pressure regulator26shown inFIGS.2to5, with an alternative embodiment of a die unit328. A detailed description of pressure regulator26is dispensed with here, and reference is made to the embodiment above. 
The alternative embodiment of die unit328is characterized by a die member338in which die member flow channels360are formed in a known manner. There is also a guide cone358arranged on die member338, said guide cone being mounted by means of a cone fastening screw356to die member338. A heating ring398used to heat die unit328is arranged around die member338. It can be clearly seen here that pressure regulating device26can be combined with many different die units328. The die unit may be formed as a two-part die unit as described inFIG.21, or as an integral die unit as described inFIGS.1to20. The die unit can also be heated in many different ways, for example by means of an electric current, a heating fluid or by steam or the like. FIG.22shows a die unit428and a pressure regulating device426mounted on the die unit. Pressure regulating device426has a fluid inlet side432where fluid can enter pressure regulating device426via flow channel446. Pressure regulating device426also has a base member430on which a first housing ring488and a second housing ring490are arranged. An inlet/outlet for pressurized fluid484is arranged in the first housing ring488. A second inlet/outlet for pressurized fluid486is arranged on the second housing ring490. The functional principle is illustrated with reference toFIGS.23and24. As can be seen fromFIG.23, the inlets and outlets for pressurized fluid484,486are connected to a cylinder chamber496located in the second housing ring490. A piston494connected to pins474is also arranged in cylinder chamber496. A bellows492is used to seal piston494. If pressurized fluid is now introduced into cylinder chamber496via the inlet/outlet for pressurized fluid486, this causes piston494to be moved to the right in the plane of the drawing. Due to the direct coupling between piston494and pin474, this causes pin474or the plurality of pins474to be moved at least partially into die member flow channels460. Due to the positioning of pins474relative to cylinder chamber496, the flow cross-section in die member flow channels460and in annular channel section450is regulated. As shown inFIG.23, die member438has a die plate440and is connected to a guide cone458by means of a cone fastening screw456and a centering pin454. FIG.24shows an operating state of pressure regulating device426, in which pins474are in a state that narrows annular channel section450less than inFIG.23. Piston494can be moved to the left—in the drawing plane—by introducing pressurized fluid via an inlet/outlet for pressurized fluid484into the cylinder chamber496on the side of piston494facing away from bellows492. Provided that pressurized fluid can flow out of inlet/outlet486, introducing pressurized fluid via inlet/outlet484causes piston494to move to the left in the plane of the drawing, and pins474coupled to piston494to move to the left and thus at least partially out of annular channel section450and die member flow channels460. FIGS.25and26show alternative embodiments of die assembly4already described with reference toFIGS.4and5. The embodiment shown inFIGS.25and26differs specifically from the one shown inFIGS.4-5by the shape of regulating section52a, which is concave inFIGS.25and26, and by the shape of die member38a, which has a convex flow channel in the region of annular channel section50inFIGS.25and26. FIGS.27and28show another alternative embodiment of a regulating section52band die member28b. InFIGS.27and28, regulating section52bis now convex in shape. 
In the region of regulating section52b, die member38bis correspondingly concave in shape. For the rest, reference is made to the description ofFIGS.4-5. FIGS.29and30show another alternative embodiment of a die unit528. Die unit528has a die member538which is connected to a further member530. In combination with an axially adjustable guide cone558, base member530forms an annular channel section550. The free cross-section of flow in annular channel550can be influenced by moving the axially adjustable guide cone558translationally relative to base member530. Guide cone558is axially adjusted as follows: The axially adjustable guide cone558is initially guided in a translationally movable manner relative to die member538by means of a conical guide592. Guide cone558is adjusted fluidically here. To that end, die member538is configured in such a way that it forms a first pressure chamber580in combination with a pressure chamber ring590. If pressurized fluid is introduced into the first pressure chamber580, the axially adjustable guide cone558is thus moved to the left in the plane of the drawing. A second pressure chamber582which is sealed against a distributor section854by means of a sealing ring586is also formed in guide cone558. If pressurized fluid is now introduced into the second pressure chamber582by means of inlet/outlet588, this results in the axially adjustable guide cone558moving to the right in the plane of the drawing, and in the free cross-section of flow being reduced in the region of annular channel section550. Conversely, introducing pressurized fluid into the first pressure chamber580causes the axially adjustable guide cone558to move to the left in the plane of the drawing. The result is that the free cross-section of flow is increased in the region of annular channel section550. Cone558has a trapezoidal section596on its side facing annular channel section550, for influencing the cross-section of flow in annular channel section550. An alternative embodiment of a die unit528a, which likewise implements the basic principle of an axially adjustable guide cone558a, is shown inFIGS.31and32. A bellows594is arranged between the axially adjustable guide cone558aand die unit538a. If the axially adjustable guide cone is in an extended state, as shown inFIG.31, in which the axially adjustable guide cone558ahas been moved to the left in the plane of the drawing, bellows594rests tightly against a transition area between the axially adjustable guide cone558aand die unit538a. This means that the free flow cross-section of annular channel section550is not constricted by bellows594. In the state shown inFIG.32, however, the axially adjustable guide cone558ais in a retracted state, i.e., it is moved to the right in the plane of the drawing, in comparison withFIG.31. If the axially adjustable guide cone558ais in the respective state, bellows594is compressed, which causes it to arch into the region of the annular channel section. This in turn causes a reduction in the free cross-section of flow in the region of annular channel section550. The free cross-section of flow in the region between base member530and the axially adjustable guide cone558ais additionally adjusted by moving guide cone558atranslationally relative to base member530. An alternative mechanical adjusting device for axially adjusting a guide cone658is shown inFIGS.33and34. Guide cone658is initially guided in an axially adjustable manner inside die member638.
In the region of its longitudinal axis, die member638has a bore into which a set screw696is inserted. Set screw696can be actuated from outside the device, in particular from the die plate640side. A nut698is fitted on set screw696. The axially adjustable guide cone658also has a bore arranged in the region of its longitudinal axis, which bore has an internal thread in which an external thread applied to set screw696can engage. Rotatability of the axially adjustable guide cone658is inhibited by a centering pin654, so any rotation of set screw696results in the axially adjustable guide cone undergoing a translational movement in the axial direction, depending on the direction of rotation. The flow cross-section between guide cone658and base member630is influenced by the axial position of guide cone658. FIGS.35and36show a further embodiment in which mechanical axial adjustment of a guide cone758is likewise performed. However, the respective adjusting element or adjusting pin796is no longer in the region of a die plate740, but extends radially outwards from a die member738. To that end, adjusting pin796is arranged in a radial recess in die member738. On its inwardly facing side, adjusting pin796has a gear section782which cooperates with a rotating member798. By means of gear section782, a rotational movement of adjusting pin796is transferred to rotating member798in such a way that the latter can now be rotated about a pivot axis that corresponds substantially to the longitudinal axis of die member738. Rotating member798is guided by means of a ball bearing786. The position of rotating member798on a receiving portion784of die member738is fixed, in addition, by a lock ring788. An external thread780which engages with a guide cone internal thread778is also provided on some regions of rotating member798. This has the effect that any rotational movement of rotating member798will change the axial position of the axially adjustable guide cone758. This change in position of the axially adjustable guide cone758will in turn cause a change in the flow cross-section between base member730and the axially adjustable guide cone758. Another alternative embodiment of a guide cone858that is fluidically adjustable in the axial direction is shown inFIGS.37and38. The axially adjustable guide cone858is mounted axially movably in a die member838and is axially adjustable by means of fluid which can be introduced into a first pressure chamber880and a second pressure chamber882. Axial adjustment of guide cone858causes a change in the flow channel between a base member830and the axially adjustable guide cone858. The free cross-section of flow in annular channel section850is not changed by axial adjustment of guide cone858. FIG.39shows a die unit928which has a die member938in which throttle pins996extend radially inwards. A guide cone958is arranged on die member938.FIGS.40and41show sectional views of a region in die member938. Die member flow channels960extend here through die member938. These channels conduct fluid from a flow channel960adjacent guide cone958to die plate940. Throttle pins996are arranged in die member938to regulate the free cross-section of flow in die member flow channels960. These throttle pins996can be actuated from outside die member938and restrict the free cross-section of flow of die member flow channels960depending on how far throttle pins996are inserted into die member flow channels960. In the state shown inFIG.40, throttle pin996restricts die member flow channel960only partially.
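Both mechanical variants described above translate a rotation of an externally accessible element into an axial movement of the guide cone, so the resulting travel is simply the number of turns times the thread lead, divided by any gear reduction. A minimal sketch follows; the M8 x 1.25 thread lead and the 20:1 worm reduction are example values assumed for illustration, not figures from the disclosure.

```python
def cone_travel_mm(turns: float, thread_lead_mm: float,
                   gear_ratio: float = 1.0) -> float:
    """Axial travel of the guide cone for a given number of turns of the
    externally actuated adjusting element.

    For the set-screw variant (FIGS. 33/34) gear_ratio is 1; for the
    worm-drive variant (FIGS. 35/36) one turn of the adjusting pin rotates
    the threaded rotating member only by 1/gear_ratio of a turn.
    """
    return turns / gear_ratio * thread_lead_mm

# Two turns of a 1.25 mm lead set screw move the cone 2.5 mm, while the
# same two turns through a 20:1 worm reduction move it only 0.125 mm,
# allowing a much finer adjustment of the flow cross-section.
print(cone_travel_mm(2, 1.25))                 # 2.5
print(cone_travel_mm(2, 1.25, gear_ratio=20))  # 0.125
```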
In the state shown inFIG.41, throttle pin996is inserted almost completely into die member flow channel960and restricts it almost completely. Another alternative embodiment of a die unit1028is shown inFIG.42. Die unit1028again has a die member1038in which slider rods1096are inserted. A guide cone1058is also arranged on die member1038. The manner of operation of die unit1028can be seen inFIGS.43-45. As shown inFIGS.43and44, slider rods1096are coupled to slider elements1098. These slider elements1098have slider bores1084and are movably arranged inside a slider chamber1082. In the operating state shown inFIG.43, slider bores1084are arranged so that they overlap die member flow channels1060. This means that slider elements1098do not influence or only slightly influence the free cross-section of flow of die member flow channels1060. In the state shown inFIG.44, slider element1098has now been moved with the aid of slider rods1096in such a way that slider bores1084only partially overlap die member flow channels1060. The free cross-section of flow is thus influenced by positioning slider elements1098.FIG.45shows the state shown inFIG.44in an alternative cross-sectional plane. It can be seen here also that die member flow channels1060are locally restricted by the position of slider element1098, such that the free cross-section of flow is influenced. Another alternative embodiment is shown inFIGS.46-50. Referring now toFIG.46, a die unit1128has a pressure regulating device1126. Die unit1128has a base member1130. Adjusting screws1196are inserted into pressure regulating device1126. FIGS.47and48show sectional views of die unit1128in different operating states. Die member1138has die member flow channels1160. A guide cone1158is connected to die member1138. The connection is made by means of a centering pin1154and a cone fastening screw1156. Pressure regulating device1126is coupled to die member1138, which has a base member1130in which a flow channel1146is formed in combination with guide cone1158of die unit1128. Throttle elements1198are arranged in the region of said flow channel1146between base member1130and guide cone1158. Throttle elements1198are rotatably arranged on pivot axis1194. By means of an adjusting screw1196that acts on throttle element1198, throttle element1198can be pivoted into the region of flow channel1146between guide cone1158and base member1130, thus restricting the free cross-section of flow in said region, depending on the position of throttle element1198. In the state of throttle element1198as shown inFIG.47, flow channel1146is not or only very slightly restricted. In the state shown inFIG.48, however, throttle element1198extends almost completely into the flow channel formed between base member1130and guide cone1158. The effects of positioning throttle elements1198are illustrated additionally inFIGS.49and50. However, the throttle elements1198shown inFIGS.49and50do not have any rotational axes in comparison to the throttle elements shown inFIGS.47and48. FIGS.51and52show an alternative embodiment of a die unit1228. Die unit1228has a die member1238. A slider adjustment device1296is arranged about a longitudinal axis of die member1238. Slider adjustment device1296is rotatably mounted. Slider elements1298each having slider bores1284are coupled to slider adjustment device1296. In the operating state shown inFIG.51, the slider bores1284of slider adjustment device1296are arranged such that they overlap and are thus in alignment with die member flow channels1260. 
Thus, in the operating state shown inFIG.51, no significant influence is exerted on die member flow channels1260. In the state shown inFIG.52, however, slider elements1298are not aligned with die member flow channels1260. In this case, the positioning of slider elements1298causes a restriction of die member flow channels1260and reduces the free cross-section of flow. The free cross-section of flow can be regulated by positioning the slider elements1298relative to die member flow channels1260. Another alternative embodiment of a die unit1328is shown inFIGS.53-56.FIG.53, firstly, shows a die unit1328with a die member1338. A die plate1340having die orifices1342is arranged on die member1338. An adjusting head1380is also arranged in the region of die plate1340. The structure of die unit1328can be seen in detail inFIG.54. Die unit1328has a die member1338in which die member flow channels1360are arranged. A guide cone1358is arranged on die member1338. An adjusting element1384is also arranged in the region of a longitudinal axis of die member1338. Adjusting element1384has an adjusting head1380on a first side. An adjusting disc1382is arranged on adjusting element1384. Adjusting disc1382has adjusting disc bores1386. The diameter of adjusting disc bores1386is approximately the same as the diameter of die member flow channels1360. Depending on their position, i.e., depending in particular on the angle of rotation of adjusting disc1382relative to die member flow channels1360, it is possible to vary the free cross-section of flow in the region of die member flow channels1360. If die member flow channels1360are aligned with adjusting disc bores1386, there is no significant restriction or limitation of fluid flow through die member flow channels1360. However, if adjusting disc1382is rotated by means of adjusting head1380from the position shown inFIG.54, in such a way that adjusting disc bore1386is no longer aligned with die member flow channels1360, flow in die member flow channels1360is restricted. This is illustrated inFIGS.55and56. In the state shown inFIG.55, adjusting disc bores1386are aligned with die member flow channels1360, so there is no or no significant restriction of fluid flow through die member flow channels1360. In the state shown inFIG.56, however, adjusting disc bores1386are no longer aligned with die member flow channels1360, so the cross-section of flow through die member flow channels1360is restricted in the region of adjusting disc1382. An alternative embodiment of a die unit1428is shown inFIG.57. Die unit1428has an adjusting disc1482which is mounted movably relative to a die member1438. Adjusting disc1482has adjusting disc bores1490which can be positioned in alignment relative to die member flow channels1460so that there is effectively no restriction of fluid flow through die member flow channels1460, or, as shown inFIG.57, they can be brought into a non-aligned position relative to the flow channels so as to restrict the flow of fluid through die member flow channels1460. Adjusting disc1482has a threaded section1484for controlling adjusting disc1482. An adjusting element1486arranged in die member1438has a worm1488in the region of one of its ends. Worm1488matches threaded section1484in such a way that rotating the adjusting element1486having worm1488will cause adjusting disc1482to rotate. Adjusting element1486is guided in such a way that one of its ends can be actuated from outside die member1438.
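In the slider and adjusting-disc embodiments (FIGS. 43 to 45 and FIGS. 51 to 57), the free cross-section is set by how far the bores in the moving element still overlap the die member flow channels. Treating a bore and a flow channel as circles of equal radius, the open area follows the standard circle-circle overlap formula; the sketch below is purely illustrative and the 3 mm radius is an assumed value, not a dimension from the disclosure.

```python
from math import acos, sqrt, pi

def overlap_area(r_mm: float, offset_mm: float) -> float:
    """Overlap area (mm^2) of a bore and a flow channel, both modelled as
    circles of equal radius r whose centres are offset by the slider travel."""
    d = abs(offset_mm)
    if d >= 2 * r_mm:
        return 0.0                      # no overlap: channel fully blocked
    if d == 0.0:
        return pi * r_mm**2             # bores aligned: channel fully open
    return 2 * r_mm**2 * acos(d / (2 * r_mm)) - 0.5 * d * sqrt(4 * r_mm**2 - d**2)

# Example: with an assumed 3 mm channel radius, a 2 mm slider travel leaves
# a bit under 60% of the cross-section open.
full = pi * 3.0**2
print(round(overlap_area(3.0, 2.0) / full, 3))
```

For the rotary variants, the relevant centre offset comes from the disc rotation: assuming the bores sit on a pitch circle of radius Rp (again an assumption for illustration), a rotation by an angle dphi shifts each bore centre by d = 2 * Rp * sin(dphi / 2), which can then be fed into the same function.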
FIG.58shows a control block diagram1500for controlling a pressure regulating device1510. The arrangement has a pressure sensor1502, which is in signal communication with a controller1506. Depending on the pressure value measured by pressure sensor1502, controller1506actuates an actuator1508, which in turn actuates a pressure regulating device1510according to the pressure value measured by pressure sensor1502. By means of controller1506, the pressure in a plastic melt stream1504in the region of a die plate1540can be influenced in the desired manner and by means of the technical means mentioned and described in the embodiments.
LIST OF REFERENCE SIGNS
2 Pelletizing apparatus
4 Die assembly
6 Driver
8 Housing
10 Skid mount
12 Process water outlet
14 Pelletizer
16 Protective cover
18 Baseplate
20 Machine baseplate
22 Spacer elements
24 Process water inlet
26 Pressure regulating device
28 Die unit
30 Base member
31 Housing section
32 Fluid inlet side
34 Actuator
36 Actuating nut
38, 38a, 38b Die member
40 Die plate
42 Die orifices
44 Sleeve
46 Flow channel
48 Fluid discharge side
50 Annular channel section
52 Regulating section
52a Concave regulating section
52b Convex regulating section
54 Centering pin
56 Cone fastening screw
58 Guide cone
60 Die member flow channels
62 Bolt
64, 66 Fastening nuts
68 Flat washer
70 First abutment shoulder
72 Second abutment shoulder
74 Pins
104 Die assembly
126 Pressure regulating device
130 Base member
146 Flow channel
150 Annular channel section
174 Pins
176 Actuating element
178 Plunger
180 Actuating element (gear wheel)
182 Hand lever
184 Retaining ring
186 Regulating ring
188 Coupler
190 Cap ring
204 Die assembly
226 Pressure regulating device
230 Base member
236 Actuating nut/screw
246 Flow channel
250 Annular channel section
274 Pins
292 Mounting ring
294 Connecting ring
296 Extended pins
304 Die assembly
328 Die unit
338 Die member
340 Die plate
356 Cone fastening screw
358 Guide cone
398 Heating ring
426 Pressure regulating device
428 Die unit
430 Base member
432 Fluid inlet side
438 Die member
440 Die plate
446 Flow channel
450 Annular channel section
454 Centering pin
456 Cone fastening screw
458 Guide cone
460 Die member flow channels
474 Pins
484 Inlet/outlet for pressurized fluid
486 Inlet/outlet for pressurized fluid
488 First housing ring
490 Second housing ring
492 Bellows
494 Piston
496 Cylinder chamber
528, 528a Die unit
530 Base member
538, 538a Die member
540 Die plate
550 Annular channel section
558, 558a Axially adjustable guide cone
560 Die member flow channels
580 First pressure chamber
582 Second pressure chamber
584 Distributor section
586 Sealing ring
588 Inlet/outlet for pressurized fluid
590 Pressure chamber ring
592 Cone guide
594 Bellows
596 Trapezoidal section
630 Base member
638 Die member
640 Die plate
650 Annular channel section
654 Centering pin
658 Axially adjustable guide cone
660 Die member flow channels
694 Set screw receiver
696 Set screw
698 Nut
730 Base member
738 Die member
740 Die plate
758 Axially adjustable guide cone
760 Die member flow channels
778 Guide cone female thread
780 Male thread
782 Gear section
784 Receiving portion
786 Ball bearing
788 Lock ring
796 Adjusting pin
798 Rotating member
830 Base member
838 Die member
840 Die plate
850 Annular channel section
858 Axially adjustable guide cone
860 Die member flow channels
880 First pressure chamber
882 Second pressure chamber
884 Distributor section
886 Sealing ring
888 Inlet/outlet for pressurized fluid
890 Pressure chamber ring
928 Die unit
938 Die member
940 Die plate
958 Guide cone
960 Die member flow channels
996 Throttle pins
1028 Die unit
1038 Die member
1040 Die plate
1050 Annular channel section
1058 Guide cone
1060 Die member flow channels
1082 Slider chamber
1084 Slider bores
1096 Slider rod
1098 Slider element
1126 Pressure regulating device
1128 Die unit
1130 Base member
1138 Die member
1140 Die plate
1146 Flow channel
1150 Annular channel section
1154 Centering pin
1156 Cone fastening screw
1158 Guide cone
1160 Die member flow channels
1194 Pivot axis
1196 Adjusting screw
1198 Throttle element
1228 Die unit
1238 Die member
1260 Die member flow channels
1284 Slider bores
1296 Slider adjustment device
1298 Slider element
1328 Die unit
1338 Die member
1340 Die plate
1342 Die orifices
1350 Annular channel section
1354 Centering pin
1358 Guide cone
1360 Die member flow channels
1380 Adjusting head
1382 Adjusting disc
1384 Adjusting element
1386 Adjusting disc bore
1428 Die unit
1438 Die member
1460 Die member flow channels
1482 Adjusting disc
1484 Threaded section
1486 Adjusting element
1488 Worm
1490 Adjusting disc bore
1500 Control block diagram
1502 Pressure sensor
1504 Hot-melt adhesive flow
1506 Controller
1508 Actuator
1510 Pressure regulating device
1540 Die plate | 39,401 |
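The control block diagram of FIG. 58 above closes a loop from pressure sensor 1502 through controller 1506 and actuator 1508 to the pressure regulating device 1510. A minimal software sketch of such a loop is given below; the proportional control law, the gain and the toy plant model are assumptions made purely for illustration and are not part of the disclosure.

```python
def regulate_pressure(read_pressure, move_actuator, setpoint_bar: float,
                      kp: float = 0.5, steps: int = 100) -> None:
    """Simplified software analogue of the loop of FIG. 58: a pressure
    sensor feeds a controller, which drives an actuator acting on the
    pressure regulating device.

    read_pressure() returns the melt pressure in bar; move_actuator(delta)
    changes the throttle position. Both callables and the proportional-only
    control law are illustrative assumptions.
    """
    for _ in range(steps):
        error = setpoint_bar - read_pressure()
        move_actuator(kp * error)   # open or close the regulating device

# Toy usage with a fake plant in which pressure rises as the device closes.
state = {"pressure": 80.0}
def fake_sensor(): return state["pressure"]
def fake_actuator(delta): state["pressure"] += 0.8 * delta
regulate_pressure(fake_sensor, fake_actuator, setpoint_bar=100.0)
print(round(state["pressure"], 1))   # converges towards the 100 bar setpoint
```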
11858177 | EMBODIMENTS TO CARRY OUT THE INVENTION A surface material of a molding surface of a mold according to the present invention and a method for surface treatment for a molding surface of a mold to obtain the surface material are described below. Treatment Subject: Molding Surface of Mold The method for surface treatment of the present invention is a method of treatment having at least a molding surface of a mold as the treatment subject, and so the method for surface treatment of the present invention may be executed on just the molding surface, or may be executed on the entire mold including the molding surface thereof. There is no particular limitation to the application of the mold subjected to treatment, and as long as the mold is employed in an application in which the molding surface reaches 50° C. or hotter during molding, the subjected mold may be a mold employed in various applications, such as for molding a food product, for molding a thermoplastic resin or a thermoset resin, or for molding a rubber. However, particularly preferable application is made to a mold for molding resins in which the molding surface reaches a temperature in the vicinity of from 100° C. to 400° C. during molding due to contact with molten resin or the like, or due to heating of the mold itself. The substance of the mold subjected to treatment is not particularly limited as long as it contains a metal susceptible to corrosion. For example, molds of various steels generally employed for molds, such as stainless steels (SUS materials), carbon tool steels (SK materials), or alloy tool steels (SKS, SKD, SKT materials), may each be subjected to the treatment of the present invention. Moreover, molds of various substances may be subjected to treatment, such as molds made from other steel materials such as high speed tool steels (SKH materials), sintered metals such as cemented carbides, Cu—Be alloys, and molds made from other non-ferrous metal alloys. Moreover, the mold is not necessary formed entirely of a metal material, and may be a mold that includes other components, such as ceramics for example. Surface Treatment The surface treatment of the present invention as described below is performed on at least a surface of a molding surface of one of the molds described above. Preliminary Treatment Process The present process (preliminary treatment process) is a process performed as required, and as such is a process that is not necessarily always performed depending on the application etc. for the mold, and is not an essential process of the present invention. In the present process, a carbide powder is dry-ejected against a surface of a mold so as to prepare the surface by removing an electrical discharge hardened layer and softened layer arising on the surface of the mold due to electrical discharge processing or cutting processing during mold fabrication, or by removing directional processing marks (cutting marks, polishing marks, tool marks and the like) generated during machining, grinding, and polishing processes. In addition thereto, carbon element present within the carbide powder is caused to diffuse and penetrate into the surface of the mold, so as to perform carburizing at normal temperatures. Examples of carbide powders that may be employed include the powders of carbide or carbon containing substances such as B4C, SiC (SiC(α)), TiC, VC, graphite, diamond, and the like. SiC is preferably employed therefor, and SiC(α) is more preferably employed therefor. 
When employed for the objective of removing an electrical discharge hardened layer or softened layer, or of removing directional processing marks, an angular powder that has been obtained, for example, by crushing a sintered carbide-based ceramic and then sieving is preferably employed, so that the carbide powder exhibits a high cutting force. The shape of the carbide powder is not particularly limited in cases lacking such a cutting objective, and a carbide powder with a spherical shape or one with various other shapes may be employed. In order to obtain an ejection velocity required to achieve diffusion and penetration of carbon element, the powder employed has a size of 220 grit (JIS R6001-1973) (from 44 μm to 105 μm) or finer, and preferably the powder employed has a size of so-called “fine particles” of 240 grit (JIS R6001-1973) (average of average diameter from 73.5 μm to 87.5 μm) or finer. Various known blasting apparatuses capable of dry-ejecting a powder may be employed as the method for ejecting such a carbide powder onto an article to be treated. An air blasting apparatus is preferably employed therefor due to the comparative ease with which the ejection velocity and the ejection pressure can be adjusted. A direct pressure blasting apparatus, suction gravity blasting apparatus, or various other types of blasting apparatus may be employed as such an air blasting apparatus. Any of these types of blasting apparatus may be employed, and the type thereof is not particularly limited as long as it has the performance capable of dry-ejecting at an ejection pressure of 0.2 MPa or above. When a carbide powder as described above is dry-ejected at high speed using such a blasting apparatus against a surface of a mold at portions of the surface of the mold that will contact with the molding material, electrical discharge hardened layers and softened layers, directional processing marks, and the like arising during mold fabrication from electrical discharge processing and cutting processing are removed so as to prepare a non-directional mold surface. Moreover, the impact of the carbide powder against the surface of the mold causes localized temperature rises on the surface of the mold at portions impacted by the carbide powder. The carbide powder is also heated and undergoes thermal decomposition. As the carbon element present within the carbide of the carbide powder diffuses and penetrates into the surface of the mold, the carbon content of these portions increases, enabling the hardness of the surface of the mold after performing the preliminary treatment process to be greatly increased. In the preliminary treatment of the present invention, the carbide powder undergoes decomposition through thermal decomposition due to the temperature of the carbide powder rising when the carbide powder is caused to impact an article to be treated by the blast processing. The carburizing treatment is accordingly performed by the carbon element thus generated within the carbide powder diffusing and penetrating into the article to be treated. According to the preliminary treatment of this method, the diffusion and penetration of carbon element into the mold is most significant at the greatest proximity to the surface, with this also resulting in a great increase in the carbon content. The carbon content increases due to diffusion toward the inside of the article to be treated.
This results in the generation of a tilting structure in which the carbon content gradually decreases with depth from the surface of the article to be treated, with the carbon content decreased to that of an untreated state by a certain depth. The carbide powder and the article to be treated undergo a partial rise in temperature when the carbide powder impacts the article to be treated. However, the rise in temperature is only localized and instantaneous. Distortion, phase transformation, or the like in the article to be treated, such as that caused by heat treatment in an ordinary carburizing treatment performed by heating the entire mold in a carburizing furnace, is accordingly not liable to occur. Moreover, higher adhesion strength is achieved due to the generation of fine carbides, and an irregular carburized layer is not generated. Instantaneous Heat Treatment Process The present process (instantaneous heat treatment process) is performed on at least a molding surface of a mold subject to treatment (a molding surface of a mold after the preliminary treatment process in cases in which the preliminary treatment process described above has been performed). The present process is performed to achieve a surface profile that improves the demoldability by dry-ejecting a spherical powder against the surface of the mold so as to form innumerable fine depressions having a circular arc shape on the surface of the mold, and so as to further increase the surface hardness by micronization of structure in the vicinity of the surface of the molding surface. There are no particular limitations to the substance of the spherical powder employed therefor, as long as the spherical powder has a hardness equal to or more than the hardness of the mold to be treated. For example, as well as spherical powders made from various metals, a spherical powder made from a ceramic may be employed, and a spherical powder made from a similar substance to the powders of carbon or carbon containing substances described above may also be employed therefor. The spherical powder employed is spherical to an extent that enables innumerable fine indentations having a circular arc shape as described above to be formed on the surface of the mold. Note that “spherical shaped” in the present invention need not refer strictly to a “sphere”, and also encompasses non-angular shapes close to that of a sphere. Such spherical powders can be obtained by atomizing methods when the substance of the powder is a metal, and can be obtained by crushing and then melting when the substance of the powder is a ceramic. In order to achieve the ejection velocity needed to plastically deform the surface of the mold by impact to form semi-circular indentations (dimples), the particle diameter of the powder employed therefor has a size of 220 grit (JIS R6001-1987) (from 44 μm to 105 μm) or finer, and preferably “fine particles” having a size of 240 grit (JIS R6001-1973) (average of average diameter from 73.5 μm to 87.5 μm) or finer are employed therefor. Moreover, various known blasting apparatuses with dry-ejection capabilities, similar to those explained with respect to the ejection method for carbide powder when explaining the preliminary treatment process, may be employed as the method for ejecting the spherical powder onto the surface of the mold in such a manner. The type and the like of the blasting apparatus is not particularly limited, as long as it has the performance capable of ejecting at an ejection pressure of at least 0.2 MPa. 
The spherical powder such as described above is ejected against the surface for molding of the mold, and the impact of the spherical powder results in plastic deformation occurring on the surface the mold at the portion impacted by the spherical powder. As a result, even in cases in which the preliminary treatment process has been performed by employing the angular carbide powder, and even in cases in which indentations and protrusions having acute apexes were formed on the surface of the mold in the cutting achieved by the impact of such a carbide powder, the surface roughness is improved by collapsing the acute apexes, and by randomly forming innumerable smooth depressions (dimples) with circular arc shapes on the entire surface of the mold. Moreover, due to forming the dimples, a surface with improved demoldability is formed due to the incorporation of air and release agent into the dimples during molding reducing the contact area between the molding material and the molding surface. Moreover, due to the heat generated when impacted by the spherical powder, the impacted portions experience instantaneous local heating and cooling. Accompanying the instantaneous heat treatment, fine crystals are also formed at the surface of the mold and the surface of the mold undergoes work hardening due to plastic deformation when the circular arc shape depressions are formed. The surface hardness of the mold is thereby further increased from that of the state after the preliminary treatment process. Moreover, due to a compressive residual stress being imparted by the plastic deformation of the surface, this is also thought at the same time to contribute to an increase in the fatigue strength and the like of the mold, in an effect obtained by so-called “shot peening”. Titanium Powder Ejection A powder of titanium or titanium alloy (hereafter also referred to collectively as a “titanium powder”) is also ejected against at least the molding surface after being subjected to the instantaneous heat treatment as described above. A titanium oxide coating film is thereby formed on the surface for molding of the mold. Such a titanium powder is not particularly limited in shape as long as the titanium powder has a size of 100 grit (JIS R6001-1973) (from 74 μm to 210 μm) or finer, and the titanium powder employed may be spherical, angular, or various other shapes. Moreover, a powder of a precious metal (such as Au, Ag, Pt, Pd, or Ru) having an effect of promoting the catalytic function of the titanium oxide may be mixed in with the titanium powder at a range of from about 0.1% to about 10% mass ratio, and ejected therewith. Note that in the following description, the term titanium powder is employed as a collective term that encompasses titanium powders incorporating a precious metal, unless explanation particularly differentiates between a precious metal powder and a titanium powder. In cases in which a titanium powder mixed with a precious metal powder is ejected, the particle diameters of both powders are not necessarily always the same diameter, and a titanium powder and a precious metal powder having different particle diameters may be employed. 
In particular, the specific weight of precious metal powders is greater than that of titanium powders, and the particle diameter of the precious metal powder may be made smaller than that of the titanium powder so as to bring the masses of each particle of the two powders closer together, and to adjust such that the ejection velocities of both powders are substantially the same as each other. Moreover, various known blasting apparatuses with dry-ejection capabilities, similar to those explained with respect to the ejection method for carbide powder or spherical shot when explaining the preliminary treatment process or the instantaneous heat treatment process, may be employed as the method for ejecting the titanium powder described above onto the surface of the mold. The type and the like of the blasting apparatus is not particularly limited, as long as it has the performance capable of ejecting at an ejection pressure of at least 0.2 MPa. Ejecting the titanium powder as described above to cause the titanium powder to impact against the molding surface including the surface finely crystalized by the instantaneous heat treatment process results in the velocity of the titanium powder changing between before and after impact, and in energy of an amount equivalent to the deceleration in velocity becoming thermal energy that locally heats the impacted portions. The titanium powder configuring the ejection powder is heated at the surface of the substrate by this thermal energy, and the titanium is activated and adsorbed to the substrate surface and diffuses and penetrates therein. When this occurs, the surface of the titanium reacts with oxygen present in compressed gas or oxygen present in the atmosphere, and is oxidized thereby so as to form a titanium oxide (TiO2) coating film corresponding to the blend amounts in the ejection powder. The film thickness of the titanium oxide coating film is about 0.5 μm, and is activated and adsorbed to the micronized surface structure formed on the molding surface by the instantaneous heat treatment. The titanium (titanium and precious metal in cases containing a precious metal powder) diffuses and penetrates inward from the substrate surface to a depth of about 5 μm. Note that the titanium oxide coating film formed in this manner is oxidized by reaction with oxygen in compressed gas or the atmosphere due to heat generated during impact. This means that a tilting structure is generated in which there is a lot of bonding with oxygen in the vicinity of the surface where the temperature is highest, and the amount of bonding with oxygen gradually decreases on progression further inward from the surface. EXAMPLES The following Test Examples 1 to 4 illustrate examples in which the method for surface treatment of the present invention is applied to various molds, and the Test Example 5 illustrates a result when an evaluation test for corrosion resistance is performed on a test strip that has had the surface treatment of the present invention performed thereon. Test Example 1: Pudding Mold (1) Treatment Conditions A mold (Example 1) was produced by performing instantaneous heat treatment and titanium powder ejection under the conditions listed in Table 1 below to all faces, including a molding surface of a mold made from stainless steel (SUS 304) employed for molding puddings (a food product). The instantaneous heat treatment alone was performed to produce another mold (Comparative Example 1). 
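Because the precious-metal admixture is denser than titanium, the passage above suggests choosing a smaller particle diameter for the precious metal so that the individual particle masses, and hence the ejection velocities, of the two powders are comparable. For equal mass of two spherical particles the diameters scale with the inverse cube root of the density ratio; the following sketch illustrates this, using gold as an assumed example admixture (the patent names Au, Ag, Pt, Pd and Ru as options, and the densities used here are general material values, not figures from the disclosure).

```python
def matched_diameter_um(ti_diameter_um: float,
                        rho_ti: float = 4.51, rho_pm: float = 19.3) -> float:
    """Diameter a precious-metal particle should have so that its mass
    matches that of a titanium particle of the given diameter.

    Equal mass of two spheres gives d_pm = d_Ti * (rho_Ti / rho_pm) ** (1/3).
    Densities are in g/cm^3; 19.3 corresponds to gold (example only).
    """
    return ti_diameter_um * (rho_ti / rho_pm) ** (1.0 / 3.0)

# A 100 um titanium particle is matched in mass by a gold particle of
# roughly 62 um diameter, so both reach similar ejection velocities.
print(round(matched_diameter_um(100.0), 1))
```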
TABLE 1: Pudding Mold (SUS 304) Treatment Conditions
Pudding Mold: SUS 304 (380 HV) (φ50 mm x height 30 mm x thickness 1 mm)
Product to be Molded: Pudding (food product)
Conditions are given as: Instantaneous Heat Treatment | Titanium Powder Ejection
Blasting Apparatus: Gravity type (SGF-4A: made by Fuji Manufacturing Co. Ltd) (both treatments)
Ejection Material (Substance): alumina-silica beads (hard beads FHB) | pure titanium (TIROP-150: made by Sumitomo Sitix Corporation)
Grain Size: 400 grit (from 38 μm to 53 μm diameter) | 100 grit or finer (from 45 μm to 150 μm diameter)
Ejection Pressure: 0.4 MPa | 0.5 MPa
Nozzle Diameter: φ9 mm long | φ9 mm long
Ejection Distance: 200 mm | 150 mm
Ejection Time: All faces: 30 seconds × 6 directions | All faces: 30 seconds × 6 directions
(2) Test Method and Test Results Puddings were consecutively manufactured while respectively employing the mold of Example 1 (instantaneous heat treatment+titanium powder ejection) and the mold of Comparative Example 1 (instantaneous heat treatment alone). A so-called “baked pudding” was manufactured as the pudding by putting a mold containing a pudding liquid into an oven and heating. After the pudding liquid filling the mold had been caused to solidify in the mold by the heat from the oven so as to mold the pudding, a subsequent operation was performed to remove the finished pudding from the mold. The “lifespan” was evaluated as the point when the mold was replaced accompanying a deterioration in demoldability, and dirt on the molding surface and demoldability were evaluated. The results thereof are listed in Table 2. Note that the temperature (maximum value) at the molding surface during pudding manufacture (during molding) rises to 180° C., this being the temperature of the oven.
TABLE 2: Pudding Mold Test Results
Columns: Surface Roughness (Ra) | Lifespan | Dirt/Demoldability
Comparative Example 1: 0.3 μm | 10,000 hours | Became gradually more dirty and demoldability gradually deteriorated
Example 1: 0.2 μm | 20,000 hours | Dirt did not adhere and demoldability was also good
(3) Interpretation Etc. An untreated pudding mold (polished by buffing after press molding) had poor demoldability and needed to be replaced at 5,000 hours of use. In comparison to the untreated mold, it was confirmed that not only the mold of Example 1, but also the mold of Comparative Example 1, achieved a greatly extended lifespan, not being susceptible to dirt adhering and exhibiting good demoldability. Moreover, whereas the untreated mold had a hardness of 380 HV and a residual stress of −190 MPa, the mold of the Comparative Example 1 subjected to instantaneous heat treatment as described above exhibited an improvement in surface hardness to 580 HV and an improvement in residual stress to −1080 MPa. A great reduction in pitting corrosion generation was confirmed by a test according to the method of ferric chloride corrosion tests for stainless steels (JIS G0578:2000). However, in the mold of Comparative Example 1 subjected to instantaneous heat treatment alone, there was noticeable adherence of dirt thereto and demoldability deteriorated after 10,000 hours of use, such that replacement was required. In contrast thereto, with the mold of Example 1 subjected to both the instantaneous heat treatment and titanium powder ejection, neither the adherence of dirt nor a reduction in demoldability was seen even after exceeding 10,000 hours of use, enabling the lifespan to be extended to 20,000 hours of use.
The above results mean that one could say that the above advantageous effects are obtained in the mold of Example 1 by forming the titanium oxide coating film on the surface by titanium powder ejection. The titanium oxide coating film in the present invention at the surface material for a molding surface of a mold and formed by the treatment method thereof, is accordingly thought to be the entity that decomposed dirt in a state filled with the pudding liquid, and therefor decomposed dirt even in a state in which no light was being irradiated thereon. The titanium oxide coating film is also thought to be the entity that exhibited a photocatalyst-like function of preventing dirt from adhering due to hydrophilic properties being exhibited. Although the reason that titanium oxide exhibited a photocatalyst-like function even in an environment not irradiated with light in this manner is not completely clear, industrially manufactured titanium oxide loses oxygen when heated to a high temperature, and changes from a white color to a black color. The material that has turned such a black color exhibits the properties of a semiconductor. Namely, semiconductor-like properties are exhibited when in a state in which there is a deficit of oxygen bonding. The titanium oxide coating film formed on the surface of a mold in the present invention, as stated above, has a tilting structure in which the amount of bonding to oxygen is greatest in the vicinity of the surface of the mold, and the amount of bonding to oxygen gradually decreases on progression inward from the surface. The titanium oxide present inside accordingly has a deficit of bonding to oxygen, and this is thought to be the reason why semiconductor-like properties are exhibited thereby. Thus by being employed under heating, charge migration is thought to occur due to thermal excitation, so as to have a catalyst-like (referred to as a “semiconductor catalyst-like” in the specification of the present invention) function triggering a charge-migration type of oxidation-reduction effect. Generally a semiconductor catalyst needs to be a catalyst having a special structure, such as being doped with an electron donor element or with an electron acceptor element. Obtaining the advantageous effect of exhibiting a catalytic action with heat by using the titanium oxide coating film obtained by the comparatively simple method of titanium powder ejection is an advantageous effect that greatly exceeds expectations. Note that a catalyst-like action is exhibited even in an environment not irradiated with light as described above in a pudding mold with the surface material for a molding surface of a mold or treated with the treatment method thereof in the present invention, and a catalyst-like function is also exhibited when employed under heating at 50° C., as illustrated by “Test Example 5” described below. Similar advantageous effects, such as preventing dirt from adhering, improving demoldability, and extending lifespan, are accordingly thought to be obtained even in cases in which, instead of manufacturing a baked pudding as described above, a mold that has a surface treated with the method of the present invention is employed to manufacture gelatin puddings by taking a pudding liquid with added gelatin at about 50° C. to 60° C. and cooling and solidifying the pudding liquid inside the mold. 
Test Example 2: TPU Molding Mold (1) Treatment Conditions A molding surface of a mold made from prehardened steel for use in molding a thermoplastic polyurethane elastomer (TPU) is subjected to preliminary treatment, instantaneous heat treatment, and titanium powder ejection under conditions as listed in Table 3 below to produce a mold (Example 2). A mold (Comparative Example 2) was also produced by performing only the preliminary treatment and the instantaneous heat treatment thereon.
TABLE 3: Treatment Conditions for Thermoplastic Polyurethane Elastomer Molding Mold
Mold: Prehardened Steel (NAK 55 made by Daido Steel Co. Ltd: 400 HV) (500 mm × 500 mm × 20 mm)
Product to be Molded: Thermoplastic Polyurethane Elastomer
Conditions are given as: Preliminary Treatment | Instantaneous Heat Treatment | Titanium Powder Ejection
Blasting Apparatus: gravity type (SGF-4A: made by Fuji Manufacturing Co. Ltd) (all treatments)
Ejection Material (Substance): SiC | HSS | pure titanium (TIROP-150 made by Sumitomo Sitix Corporation)
Grain Size: 220 grit (from 44 μm to 105 μm diameter) | 300 grit (from 37 μm to 74 μm diameter) | 100 grit or finer (from 45 μm to 150 μm diameter)
Ejection Pressure: 0.3 MPa | 0.4 MPa | 0.5 MPa
Nozzle Diameter: φ9 mm | φ9 mm long | φ9 mm long
Ejection Distance: 100 mm to 150 mm | 100 mm to 150 mm | 100 mm to 150 mm
Ejection Time: About 5 minutes | About 5 minutes | About 10 minutes
(2) Test Method and Test Results The mold of Example 2 (preliminary treatment+instantaneous heat treatment+titanium powder ejection) and the mold of Comparative Example 2 (preliminary treatment and instantaneous heat treatment alone) were each employed for molding a thermoplastic polyurethane elastomer. When molding, successive operations were performed of filling a mold that had been heated to 50° C. with a thermoplastic polyurethane elastomer that had been heated to 220° C., molding, and taking the resin out from the mold after molding. The time when the mold was replaced due to an accompanying deterioration in demoldability was evaluated as the “lifespan” thereof, and dirt on the molding surface and demoldability were also evaluated. The results thereof are listed in Table 4.
TABLE 4: Resin Mold Test Results
Columns: Surface Roughness (Ra) | Lifespan | Dirt/Demoldability
Comparative Example 2: 0.3 μm | 400,000 shots | Little dirt adhering; slight problems of demolding
Example 2: 0.3 μm | 700,000 shots | No dirt adhering; no problems of demolding
(3) Interpretation Etc. In the mold of Comparative Example 2 subjected to the preliminary treatment and the instantaneous heat treatment, there was little dirt adhering and there were slight problems of demolding. However, in the mold of Example 2 that had been further subjected to the titanium powder ejection in addition to the preliminary treatment and the instantaneous heat treatment, there was no dirt adhering nor problems demolding at all. As a result, the lifespan of the mold of Example 2 was dramatically increased compared to the mold of Comparative Example 2. The above results mean that one could say that the above advantageous effects are obtained in the mold of Example 2 by the titanium oxide coating film formed on the surface by titanium powder ejection. It is thought that the titanium oxide coating film of the surface material for a mold in the present invention, formed by the treatment method thereof, was accordingly the entity that decomposed dirt on the molding surface employed in a state in which no light was being irradiated thereon, and the entity that exhibited a photocatalyst-like function or semiconductor catalyst-like function of preventing dirt from adhering due to hydrophilic properties being exhibited.
Test Example 3: Glass Fiber Reinforced PPS Molding Mold (1) Treatment Conditions A molding surface of a mold manufactured from prehardened steel for use in molding polyphenylene sulfide (PPS) containing glass fibers at a 40% mass ratio was subjected to preliminary treatment, instantaneous heat treatment, and titanium powder ejection under conditions as listed in Table 5 below to produce a mold (Example 3). A mold (Comparative Example 3) was also produced by performing the preliminary treatment and the instantaneous heat treatment alone.
TABLE 5: Treatment Conditions for Glass Fiber Reinforced PPS Molding Mold
Mold: Prehardened Steel (STAVAX made by Bohler Uddeholm Co., Ltd: 560 HV) (250 mm × 250 mm × 50 mm)
Product to be Molded: PPS (containing 40% glass fiber)
Conditions are given as: Preliminary Treatment | Instantaneous Heat Treatment | Titanium Powder Ejection
Blasting Apparatus: Fine powder type (SGF-4A: made by Fuji Manufacturing Co. Ltd) and Direct pressure type (FD-4: made by Fuji Manufacturing Co. Ltd)
Ejection Material (Substance): SiC | HSS | pure titanium (TIROP-150 made by Sumitomo Sitix Corporation)
Grain Size: 400 grit (average of average diameter of from 37 μm to 44 μm) | 400 grit (from 30 μm to 53 μm diameter) | 100 grit or finer (from 45 μm to 150 μm diameter)
Ejection Pressure: 0.3 MPa | 0.5 MPa | 0.4 MPa
Nozzle Diameter: φ9 mm | φ9 mm long | φ5 mm long
Ejection Distance: 100 mm to 150 mm | 100 mm to 150 mm | 150 mm to 200 mm
Ejection Time: About 4 minutes | About 4 minutes | About 6 minutes
(2) Test Method and Test Results The mold of Example 3 (preliminary treatment+instantaneous heat treatment+titanium powder ejection) and the mold of Comparative Example 3 (preliminary treatment and instantaneous heat treatment alone) were each employed for molding PPS containing glass fibers at 40% mass ratio. When molding, successive operations were performed of filling a mold that had been heated to 150° C. with PPS heated to 300° C., molding, and taking the resin out from the mold after molding. The time when the mold was replaced due to an accompanying deterioration in demoldability was evaluated as the “lifespan” thereof, and dirt on the molding surface and demoldability were also evaluated. The results thereof are listed in Table 6.
TABLE 6: Test Results for PPS Molding Mold
Columns: Surface Roughness (Ra) | Lifespan | Dirt/Demoldability
Comparative Example 3: 0.3 μm | 1,500,000 shots | Corrosion occurred and dirt adhered
Example 3: 0.2 μm | 3,000,000 shots | No corrosion occurred; good demoldability
(3) Interpretation Etc. In the mold employed for molding PPS resin reinforced with glass fiber, the molding surface is readily scratched by the high-hardness glass fibers contacting the molding surface. The molding surface is also readily corroded due to the PPS also generating a corrosive gas (acidic gas) containing sulfur and chlorine when the polymer of the PPS itself, or an oligomer component thereof, decomposes at high temperature. In the mold of Comparative Example 3 subjected to the preliminary treatment and the instantaneous heat treatment, a dramatic reduction in the generation of corrosion could also be achieved compared to an untreated mold, and a dramatic reduction in the adherence of dirt could also be achieved therein. However, in the mold of Example 3 further subjected to the titanium powder ejection in addition to the preliminary treatment and the instantaneous heat treatment, corrosion no longer occurred and the mold showed good demoldability, resulting in no dirt adhering nor any demolding problems.
The lifespan of the mold of Example 3 was accordingly improved by a factor of two compared to the mold of Comparative Example 3. The above results mean that one could say that the above advantageous effects can be obtained in the mold of Example 3 by forming the titanium oxide coating film on the surface by the titanium powder ejection. The surface material of a molding surface of a mold of the present invention and the titanium oxide coating film formed by the treatment method thereof is thought to exhibit a photocatalyst-like function of preventing the generation of corrosion on the molding surface not irradiated with light, decomposing dirt, and preventing dirt from adhering due to hydrophilic properties being exhibited. Test Example 4: Mold for Rubber (1) Treatment Conditions A mold (Example 4) was produced by subjecting a molding surface of a prehardened steel mold employed for molding rubber to preliminary treatment, instantaneous heat treatment, and titanium powder ejection under conditions as listed in Table 7 below, and a mold (Comparative Example 4) was also produced by performing the preliminary treatment and the instantaneous heat treatment alone.
TABLE 7: Rubber Mold Treatment Conditions
Mold: Prehardened Steel (NAK 55 made by Daido Steel Co. Ltd: 400 HV) (450 mm × 450 mm × 20 mm)
Product to be Molded: Rubber
Conditions are given as: Preliminary Treatment | Instantaneous Heat Treatment | Titanium Powder Ejection
Blasting Apparatus: Gravity type (SGF-4A: made by Fuji Manufacturing Co. Ltd) and Direct pressure type (FD-4: made by Fuji Manufacturing Co. Ltd)
Ejection Material (Substance): SiC | HSS | pure titanium (TIROP-150 made by Sumitomo Sitix Corporation)
Grain Size: 220 grit (from 44 μm to 105 μm diameter) | 300 grit (from 37 μm to 74 μm diameter) | 100 grit or finer (from 45 μm to 150 μm diameter)
Ejection Pressure: 0.3 MPa | 0.5 MPa | 0.4 MPa
Nozzle Diameter: φ9 mm | φ9 mm long | φ5 mm long
Ejection Distance: 100 mm to 150 mm | 100 mm to 150 mm | 150 mm to 200 mm
Ejection Time: About 5 minutes | About 5 minutes | About 8 minutes
(2) Test Method and Test Results The mold of Example 4 (preliminary treatment+instantaneous heat treatment+titanium powder ejection) and the mold of Comparative Example 4 (preliminary treatment and instantaneous heat treatment alone) were each employed for molding a rubber. When molding the rubber, repeated operations were performed of filling a mold that had been heated to 150° C. with a vulcanized rubber, then closing the mold and pressing the rubber to harden (direct compression molding) and taking the molded article out from the mold after hardening. The time when the mold was replaced due to an accompanying deterioration in demoldability was evaluated as the “lifespan” thereof, and dirt on the molding surface and demoldability were also evaluated. The results thereof are listed in Table 8.
TABLE 8: Mold for Rubber Test Results
Columns: Surface Roughness (Ra) | Lifespan | Dirt/Demoldability
Comparative Example 4: 0.4 μm | 750,000 shots | Some dirt adhering
Example 4: 0.3 μm | 1,000,000 shots | No dirt adhering; no problems demolding
In the mold of Comparative Example 4 subjected to the preliminary treatment and the instantaneous heat treatment alone, there was also a great reduction in dirt adhering and demolding problems compared to an untreated mold. However, a further reduction in dirt adhering could be achieved in the mold of Example 4 that had been further subjected to the titanium powder ejection in addition to the preliminary treatment and the instantaneous heat treatment. In a mold for rubber, great effort and expense is incurred in operations to clean the mold after usage.
However, in the mold of the present invention, a significant reduction could be achieved in the effort incurred for cleaning operations after use, dirt did not adhere even after being used for 1,000,000 shots, and the lifespan of the mold could be greatly extended. The above results mean that, due to the mold of Example 4 exhibiting excellent antifouling properties compared to the mold of Comparative Example 4, one could say that the above advantageous effects were obtained in the mold of Example 4 due to forming the titanium oxide coating film on the surface by the titanium powder ejection. The titanium oxide coating film formed in the present invention on the surface material for a molding surface of a mold is thought to exhibit a photocatalyst-like or semiconductor catalyst-like function, which is to decompose dirt even on the molding surface for rubber employed in a state not irradiated with light and to prevent dirt from adhering due to hydrophilic properties being exhibited. Test Example 5: Corrosion Resistance Test (1) Test Objective The test objective was to confirm that a mold according to the present invention, and a steel surface that had been subjected to the surface treatment with the treatment method thereof, would exhibit a corrosion inhibiting effect in an environment not irradiated with light. (2) Test Method SUS 304 was welded (TIG welded) and imparted with a tensile residual stress to produce a test strip susceptible to stress corrosion cracking. A CASS test according to JIS H 8502:1999 “7.3 CASS Test Method” was then performed on a welded test strip that was otherwise untreated, and on a welded test strip that had been subjected to the surface treatment of the mold and treatment method according to the present invention (instantaneous heat treatment+titanium powder ejection). The CASS test performed here differs from a salt spray test performed by simply spraying salt water, and is a corrosion resistance test performed by spraying a brine adjusted to an acidity of from pH 3.0 to pH 3.2 by the addition of copper II chloride and acetic acid. This means that the CASS test is a test of corrosion resistance performed in an extremely harsh corrosion environment. Note that the test conditions of the CASS test are as listed in the following Table 9.
TABLE 9: CASS Test Conditions
Columns: Item | When Adjusted | During Test
Sodium chloride concentration in g/L | 50 ± 5 | 50 ± 5
Copper II chloride (CuCl2•H2O) concentration in g/L | 0.26 ± 0.02 | —
pH | 3.0 | 3.0 to 3.2
Spray rate in ml/80 cm2/h | — | 1.5 ± 0.5
Temperature inside test chamber in °C | — | 50 ± 2
Temperature of brine tank in °C | — | 50 ± 2
Temperature of saturated air vessel in °C | — | 63 ± 2
Compressed air pressure in kPa | — | from 70 to 167
(3) Test Result and Interpretation The states of the test strips after the CASS test are illustrated inFIG.1(untreated) andFIG.2(Example). As illustrated inFIG.1, the generation of rust was observed on the surface of the untreated test strip. In contrast thereto, on the test strip that had been subjected to surface treatment for a mold according to the present invention and method thereof, no rust generation was observed and the clean state present prior to the CASS test was maintained, as illustrated inFIG.2, confirming that the test strip of the mold according to the present invention enabled extremely high corrosion resistance to be obtained. In shot peening, tensile residual stress that has been generated in a test strip by welding is released, and a compressive residual stress is imparted thereto.
This is accordingly known to have an advantageous effect of inhibiting stress corrosion cracking, however is not known to directly inhibit corrosion (rust) from occurring. In the test strip of the surface material of the molding surface of the present invention, against rust generation the titanium oxide coating film formed on the surface thereof by titanium powder ejection is accordingly thought to be the reason a photocatalyst-like or a semiconductor catalyst-like function (reduction function) is exhibited. Note that a CASS test is a test performed using a lidded test chamber in order to maintain the environment inside the test chamber in a constant state, and light is accordingly not irradiated onto the test strip during testing. However, the CASS test is performed by testing in a state in which the temperature inside the test chamber is 50° C.±2° C., and so the temperature of the test strip is also warmed to 50° C.±2° C. The titanium oxide coating film is thought to exhibit the photocatalyst-like or semiconductor catalyst-like function due to testing being performed in such a warmed state. Note that the surface roughness Ra was 0.3 μm at a smooth portion in the vicinity of the weld on the test strip of the Comparative Example that had been subjected to instantaneous heat treatment by the ejection of 400 grit (diameter from 30 μm to 53 μm) shot made from HSS ejected at an ejection pressure of 0.5 MPa thereon, and the surface hardness was improved to 580 HV from an untreated state of 300 HV. The surface roughness Ra was improved to 0.2 μm at a smooth portion in the vicinity of the weld on the test strip of the present invention Example that was a test strip subjected to instantaneous heat treatment under the above conditions, and then further subjected to ejection of titanium powder of particle diameter from 45 μm to 150 μm ejected at an ejection pressure of 0.4 MPa. The surface hardness after treatment was also maintained without change at 580 HV. The hardness of titanium is about 300 HV, however the hardness of titanium oxide (TiO2), an oxide of titanium, reaches a hardness of 1000 HV. Thus the surface hardness of the titanium powder used for ejection is accordingly a hardness of about 1000 HV and higher than the 580 HV surface hardness of the test strip after the instantaneous heat treatment from forming an oxide coating film. Thus in the method for surface treatment of the present invention, the titanium powder ejection against the surface after instantaneous heat treatment is thought to smooth by pressing and collapsing protrusion tips of surface indentations and protrusions formed by the impact of shot during the instantaneous heat treatment, so that burnishing is performed. Namely, not only are there depressions (dimples) formed by the impact of shot on the surface of the test strip after instantaneous heat treatment, but a state is achieved in which acute protrusions are also formed between one and another of the formed depressions. In contrast thereto, by further performing the titanium powder ejection against the surface after instantaneous heat treatment, smoothing (burnishing) is achieved by pressing and collapsing the protrusions of the indentations and protrusions that had been formed on the surface. The surface achieved thereby, which lacks pointed protrusions and has been deformed into a smoothed profile with depressions alone, is thought to be why the numerical value of the surface roughness Ra is reduced. 
Thus with the surface material of the molding surface according to the present invention, apex portions of pointed protrusions, which would resist removal when removing the molded article from the mold, are pressed and collapsed so as to smooth the surface material while leaving the depressions (dimples) that were generated by the instantaneous heat treatment, into which a release agent and air etc. can be introduced to reduce the contact area between the surface of the molded article and the surface of the mold. The surface material accordingly not only exhibits improved demoldability that accompanies the antifouling and anticorrosion due to the photocatalyst-like or semiconductor catalyst-like effect of the titanium oxide, but after processing the surface itself has an improved and superior structure with improved demoldability. Summary of Test Results The Test Examples 1 to 5 as described above had the titanium oxide coating film formed on the molding surface of the present invention (the surface of the test strip in the Test Example 5). In each case the results obtained indicated that, irrespective of the tests being performed in state in which light is not irradiated thereon, the titanium oxide coating film formed on the surface exhibited a photocatalyst-like function. However, in each of the Test Examples 1 to 5 described above, the tests were performed in a state in which the molding surface (the surface of the test strip in Test Example 5) had been heated or warmed to a temperature of 50° C.±2° C. or hotter. Due to there being no other energy present to excite a photocatalyst function of the titanium oxide coating film, the logical postulation is that the advantageous effects described above of improved corrosion resistance and antifouling etc. were induced by the heat imparted to the portions formed with the titanium oxide coating film. Thus in Test Examples 1 to 5, realization of a photocatalyst-like or semiconductor catalyst-like function was confirmed (see Test Example 5) in a state in which a sample has been warmed to at least 50° C. (±2° C.). Thus application of the surface treatment of the present invention to at least a molding surface of a mold having a molding surface that reaches 50° C. or hotter during molding, enables the following advantageous effects to be obtained at the same time: improved wear resistance accompanying raising the hardness of the molding surface; improved corrosion resistance; and improved demoldability. | 44,012 |
11858178

DETAILED DESCRIPTION OF PARTICULAR EMBODIMENTS

To facilitate comprehension, the same reference numerals will be used to denote identical features in the two embodiments, with the symbol "′" added for the second embodiment.

As will be described in detail hereinafter, the concept of the invention is based on the use of a modular injection mold comprising at least one movable intermediate plate. This design permits the injection mold to be modified in a flexible manner simply by changing one or more intermediate plates, without having to manufacture an entire mold for each change. For this reason, the invention provides the versatility of an injection mold in a reliable and robust manner.

More specifically, the invention is based on an injection mold comprising a mold core which comprises a first part and a second part that are mobile relative to one another between a first injection position, in which the two parts are moved together to form an injection cavity, with at least one intermediate plate arranged between the two parts and contributing thereto, permitting the injection of a material to form an injected blank or a finished part, and a second demolding position, permitting the injected blank or finished part to be removed. Such an injected blank comprises at least one component and preferably a plurality of components which are connected together by a sprue tree and/or a support, thus permitting the simultaneous injection-molding of a plurality of components. In this case, the components which are injected simultaneously and their support are then separated from the sprue tree by a cutting step. The injection cavity of the injection mold thus makes it possible to manufacture one or more components simultaneously. These potentially multiple components may be identical or different.

The two mobile parts of the core of the injection mold may also adopt a second demolding position in which the two parts are moved away from one another to permit the demolding of an injected blank. To achieve this, the injection mold comprises at least one ejector designed to contribute to the demolding of an injected blank (or component). According to the invention, the injection mold also comprises at least one intermediate plate, which is separate from the two parts of the mold core and movable, which is arranged between the two parts of the mold core, and which comprises at least one first cutout forming a part of the injection cavity of the injection mold.

FIG. 1 thus shows a first part 1 of an injection mold core, according to a first embodiment, cooperating with an intermediate plate 10 shown in FIG. 6. This intermediate plate comprises a cutout 12. The subassembly formed by the first part 1 and the intermediate plate 10 defines a cavity which permits the formation of an injected part 50 during an injection phase. It should be noted that, in such a phase, a second part, not shown, of the core of the injection mold cooperates with this subassembly 1, 10 to form a closed cavity. The second part is moved closer to or away from the first part in a direction of translation D, called the injection direction or the closing direction of the mold. The material is injected from the injection screw into the injection cavity in this same direction D. Advantageously, the two parts of the core of the injection mold and the intermediate plate have substantially planar surfaces which are perpendicular to the injection direction.
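The modular construction described above, in which the core parts are retained and only the intermediate plate is exchanged, can be pictured with a minimal data model. The sketch below is illustrative only; the class and field names are assumptions and do not correspond to any implementation in this document.

    # Illustrative sketch of the modular mold concept: the two core parts are
    # reused, and only the intermediate plate(s) are swapped to change the cavity
    # geometry. Class and field names are assumptions for illustration only.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class IntermediatePlate:
        name: str
        through_cutouts: List[str]        # e.g. ["cutout 12 (component contour)"]

    @dataclass
    class ModularMold:
        first_core_part: str              # guides the ejectors
        second_core_part: str             # closes the cavity, defines the rear face
        plates: List[IntermediatePlate] = field(default_factory=list)

        def swap_plate(self, index: int, new_plate: IntermediatePlate) -> None:
            """Change the cavity geometry without remanufacturing the core parts."""
            self.plates[index] = new_plate

    mold = ModularMold("first part 1", "second part (not shown)",
                       [IntermediatePlate("plate 10", ["cutout 12"])])
    mold.swap_plate(0, IntermediatePlate("plate 10, revised contour", ["cutout 12 (modified)"]))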
Moreover, the different ejectors2,3are mobile in this same injection direction D, as described in more detail below. The first part1also comprises a cutout5which is superposed on the cutout12of the intermediate plate10in the injection direction D. A first component ejector2is arranged in the cutout5of the first part1. In the injection configuration, this ejector2is positioned such that the end thereof is positioned in the extension of the surface of the first part1. This end thus forms part of the surface of the injection cavity. In other words, it contributes to defining the shape of a component. In addition, a second ejector3is arranged in the region of the cutout5of the first part1of the injection mold in the vicinity of the first ejector2. In the injection phase, this second ejector3also forms a surface of the injection cavity. This surface is retracted relative to the distal surface of the first ejector2. It permits a part of greater thickness of the injected part50to be defined, as will be described in more detail below. The cutout12of the intermediate plate10is a through-opening. It is also positioned opposite the cutout5of the first part such that the ejectors2,3are positioned according to a shape corresponding exactly to that of the cutout12of the intermediate plate. The cutout12of the intermediate plate contributes to the formation of the injection cavity of the injection mold, in particular a part of the injection cavity precisely defining a component to be manufactured. More particularly, the flanks of the cutout12of this intermediate plate10define the contour of the injected component. The cutout12also permits the displacement of at least one ejector2,3which is principally guided by the first part1of the core of the injection mold, permitting the ejection of a component after an injection phase. Due to this design of the injection mold, the injection cavity makes it possible to form an injected part50comprising a support56which connects the separate components together and does not form part of a component to be manufactured. A component is formed by a first portion52of the injected part50which is superposed on the first ejector2, and by a second portion53of the injected part50which is superposed on the second ejector3. It is thus apparent that the injection cavity of the injection mold which is designed to form an injected part is principally delimited in the first injection position of the injection mold by:the first cutout12of the intermediate plate10,an end of a first ejector2, the end of said first ejector2being located flush with one of the faces of the intermediate plate10, and an end of a second ejector3, these two ejectors being arranged and guided by the first part1of the mold core of the mold, and bythe second part of the mold core, not shown, which defines the rear face of the injected part50, for example a planar face. More specifically, the section of the cutout12of the intermediate plate10forms the surface of the cavity of the injection mold which defines the periphery of the timepiece component, the lateral surface62of the timepiece component, apart from the tenon. It should be noted that, in the illustrated example, the injection mold is used to manufacture a timepiece component, more specifically a winding pawl60, particularly shown inFIGS.8to10. The second above-mentioned portion53forms a tenon63of the winding pawl60and the first portion52forms the lateral part62of the winding pawl60, in particular the beaks65. 
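As an illustration of how the retracted second ejector 3 defines the thicker tenon portion, the following sketch estimates the two local thicknesses of the injected part 50. It assumes, purely for illustration, that the lateral part 52 is as thick as the intermediate plate 10 and that the tenon 53 is deeper by the retraction of the ejector 3; no such dimensions are given in this document.

    # Hypothetical thickness estimate for the injected part 50 described above.
    # Assumption (for illustration only): the lateral part 52 is as thick as the
    # intermediate plate 10, and the tenon 53 is deeper by the amount the second
    # ejector 3 is retracted below the end of the first ejector 2.
    def part_thicknesses(plate_thickness_mm: float, ejector3_retraction_mm: float):
        lateral_part = plate_thickness_mm                        # portion 52
        tenon = plate_thickness_mm + ejector3_retraction_mm      # portion 53
        return {"lateral_part_mm": lateral_part, "tenon_mm": tenon}

    # Example with made-up values:
    print(part_thicknesses(plate_thickness_mm=0.4, ejector3_retraction_mm=0.3))
    # {'lateral_part_mm': 0.4, 'tenon_mm': 0.7}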
Naturally the invention is not limited to the specific shape of the above-described injection cavity. More specifically, the geometry of the intermediate plate10and the two mobile parts of the injection mold could be suitable for forming any other desired component. Similarly, the injection mold could comprise a different number and/or different shapes of component ejectors. Moreover, the intermediate plate10of the injection mold may comprise, apart from the described cutout, at least one blind cutout and/or at least one texturization of its surface to form at least one part of the injection cavity having a different geometry. FIG.2shows in more detail a perspective view of the subassembly of the injection mold, in which it appears that the injection mold has a generally cylindrical shape, forming an annular portion46on its periphery, a plurality of cavities being arranged therein in the region of the above-mentioned multiple pairs of ejectors2,3, said cavities being designed to form a plurality of components, such as for example a plurality of identical winding pawls in this embodiment. An injected part50, shown inFIG.3, is formed during an injection phase with such an injection mold. The injected material is distributed continuously over the annular portion46containing the different components to be manufactured, which are connected together by a support56of annular shape. The injected material is also present in a deeper cavity in the central part44of the injection mold, forming a rod or bar54of an injected part50commonly called the “sprue puller insert” by which the injected part50may be handled, and forming injection sprue51in the injection channel of the material, not illustrated. The injected material is also present in the intermediate zone45between this central part44and the annular portion46, forming an injection sheet55of the injected part50. In the region of the connection between the peripheral annular portion46and the intermediate zone45, a cutout line47is formed, said cutting line forming a precutting line57on the resulting injected part. The method thus carries out a step of cutting along this precutting line57, permitting only the support56comprising the components52,53, shown inFIG.4, to be retained. To facilitate the ejection of such an injected part50after the formation thereof, the injection mold also comprises a central ejector4which is designed to cooperate with the bar54of an injected part. It further comprises support56ejectors6, arranged on the annular portion46between the different components to be manufactured. The ejection system of the ejection mold according to the first embodiment thus comprises a plurality of complementary ejectors which are movably mounted inside a first part1of the core of the injection mold, which fulfills a guide function of these ejectors. The intermediate plate10thus comprises complementary cutouts16, which are through-openings, in the region of the ejectors6of the first part of the injection mold core, to permit the displacement thereof through the intermediate plate during an ejection phase, in which the ejectors come into direct contact with the injected part to detach it from the subassembly shown. As has been described, the ejection system may be adapted to the component to be manufactured and may take different forms. However, it is advantageous to have at least one support ejector, irrespective of a component to be manufactured. 
In order to achieve this, the intermediate plate advantageously comprises at least one second cutout16which is a through-opening and is separate from the first cutout12, this second cutout being designed solely for the passage of an ejector, whilst the primary function of the first cutout is defining the injection cavity which specifically forms a component to be manufactured. In this embodiment, the multiple ejectors act in the region of the component to be manufactured, and in particular separately in the region of a tenon and a surface of the the component surrounding this tenon but also in the region of the support connecting a plurality of components and/or the central part consisting of the injection sheet. This approach makes it possible to avoid the deformation of the injected part during its ejection from the mold and is particularly suitable for parts which have a large surface area and which at the same time are thin. This approach is also particularly suitable for ejecting a fragile blank, for example formed by the injection of ceramic, which comprises particles of ceramic and a binder during the injection thereof. The first part1of the injection mold core thus comprises a plurality of openings in which different ejectors are arranged so as to be mobile in translation. This first part1of the injection mold core forms a guide for these ejectors. The intermediate plate10comprises through-openings in the region of these ejectors which are thus able to pass through said intermediate plate during an ejection phase in order to come into contact with the injected material. As mentioned above, the central part of the injected part50, shown inFIG.3, forming an injection sheet55is generally removed when opening the mold by cutting along the precutting line57and by ejecting by means of the central ejector4. This results in a ring of annular shape, which is illustrated inFIG.4and which comprises a plurality of components fixed to a support56. Once this ring is consolidated, for example by cooling in the case of a ring made of metal or polymer or by removal of the binder and sintering in the case of a ring made of ceramic, the individual components are separated from the support56by flat precision grinding, thus making it possible to define the face66of each of the components. In the described example, the components are winding pawls, more particularly shown inFIGS.8to10. Advantageously, the intermediate plate10is produced by LIGA technology. This approach comprises in the known manner the formation of a mold by photolithography, then the deposition of a metal inside the mold. This LIGA technology is advantageous since it makes it possible to obtain an intermediate plate with a high level of precision, whilst enabling a plurality of identical intermediate plates to be reproduced using the same mask. The flanks of this intermediate plate are important since they define the final shape of the injected component. To achieve this, it could be advantageous to use the teaching from the documents EP3670440 and/or EP3670441 during the manufacture of the intermediate plate10. As a variant, an intermediate plate may also be produced by conventional machining or by wire machining or by stamping or by laser machining in metal plates. The pressures and temperatures of the injection method require materials having a sufficient mechanical resistance (Rm) and a geometric stability at a temperature of at least up to 100° C., even up to 300° C. 
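A screening step of this kind can be pictured as follows; the candidate list and the property values are illustrative assumptions (suitable materials are named in the next paragraph), and the function name is arbitrary.

    # Rough illustration of screening intermediate-plate materials against the
    # thermal requirement stated above (geometric stability up to at least 100 C,
    # even up to 300 C). Candidate names and values are illustrative assumptions.
    CANDIDATES = {
        # material: assumed maximum service temperature in degrees C (illustrative)
        "nickel / nickel alloy": 400,
        "high-speed steel": 500,
        "powder-metallurgy tool steel": 500,
        "tungsten carbide": 600,
        "generic polymer plate": 80,    # would fail the requirement
    }

    def suitable_materials(required_temp_c: float = 300.0):
        return [m for m, t_max in CANDIDATES.items() if t_max >= required_temp_c]

    print(suitable_materials(100.0))
    print(suitable_materials(300.0))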
Thus the intermediate plate may be made of nickel or an alloy of nickel, high-speed steel, ASP® steel produced by powder metallurgy, tungsten carbide, or any steel conventionally used for the manufacture of molds.

It should be mentioned that the use of the intermediate plate has numerous advantages. In particular, this intermediate plate defines a significant part of the injection cavity and, in the present case, the geometry of the beaks 65 of the winding pawl 60. This initial geometry, although reworked by finishing steps such as polishing, is essential for the reliable future performance of the component. Thus, when the intermediate plate is worn, or if slight modifications are required to increase the performance of the timepiece component, it suffices to change the intermediate plate, and potentially also the ejectors. This modular construction of the injection mold provides flexibility in the form of the intermediate plate, the cutout thereof being able to be slightly modified whilst still permitting the passage of one or more ejectors.

It is important to emphasize that the described injection mold permits micro-injection. It permits the use of different materials, including polymers, composites, metals or, more particularly, ceramics. It thus permits the manufacture of a timepiece component, in particular made of ceramic.

The invention is not limited to the described embodiment. In particular, more complex geometries may be formed by using a plurality of intermediate plates, in particular two, three or even more. Thus, FIGS. 5 to 7 illustrate an injection mold according to a second embodiment comprising two intermediate plates. This second embodiment will be illustrated by way of example in the case of the manufacture of a blank and of components identical to those described in the example of the injection mold according to the first embodiment.

FIG. 5 thus shows a first part 1′ of an injection mold core according to a second embodiment, cooperating with a first intermediate plate 10′ and a second intermediate plate 20′. These two intermediate plates 10′, 20′, respectively illustrated in FIGS. 6 and 7, each comprise a cutout 12′, 22′ (through-openings), and the cutouts 12′, 22′ thereof are at least partially superposed. The subassembly formed by the first part 1′ of the injection mold core and the two intermediate plates 10′, 20′ is thus based on the superposition of three elements in the injection direction, the second intermediate plate 20′ being positioned between the first part 1′ of the injection mold core and the first intermediate plate 10′. This subassembly defines a cavity which permits the formation of an injected part 50′ during an injection phase. It should be mentioned that, in such a phase, a second part, not shown, of the injection mold core cooperates with this subassembly to form a closed cavity. The second part of the injection mold core is moved closer to or away from the first part in a direction of translation D′, called the injection direction or closing direction of the mold, as mentioned above. The material is injected into the injection cavity in this same direction D′. Advantageously, the two parts of the injection mold core and the intermediate plates have substantially planar surfaces perpendicular to the injection direction. Moreover, the different ejectors are mobile in this same injection direction D′.
The first part 1′ of the injection mold core also comprises a cutout 5′ superposed at least partially on the cutouts 12′, 22′ of the intermediate plates 10′, 20′ in the injection direction and closing direction D′ of the mold. A component ejector 3′ is arranged in the cutout 5′ of the first part 1′. In the injection configuration, this ejector 3′ is positioned such that its distal surface defines the height of the tenon 53′. This distal surface is thus part of the surface of the injection cavity; in other words, it contributes to defining the shape of a component. This surface is retracted relative to the surfaces of the two intermediate plates 10′, 20′ defining the injection cavity. This surface makes it possible to define a part of greater thickness of the injected part 50′, as will be described in detail below.

As illustrated in FIG. 6, the cutout 12′ of the first intermediate plate 10′ is a through-opening. The same also applies to the cutout 22′ of the second intermediate plate 20′. These two cutouts 12′, 22′ are also positioned opposite the cutout 5′ of the first part 1′ such that the ejector 3′ is positioned according to a shape substantially corresponding to that of the cutout 22′ of the second intermediate plate 20′. This cutout 22′ of the second intermediate plate 20′ contributes to the formation of a part of the injection cavity of the injection mold which more precisely defines a tenon 53′ of an injected part 50′ to be manufactured. It also permits the displacement of at least one ejector 3′ which is designed to act more precisely on this tenon 53′, which is principally guided by the first part 1′ of the injection mold core, and which permits the ejection of a component after an injection phase. The cutout 12′ of the first intermediate plate 10′ also contributes to the formation of the injection cavity of the injection mold, more specifically the part of the injection cavity defining the component 52′ of the injected part 50′ apart from the tenon 53′.

Due to this design of the injection mold, the injection cavity makes it possible to form an injected part 50′ similar to that obtained by the injection mold according to the first embodiment, comprising a support 56′ which connects the separate components together and does not form part of a component to be manufactured. A component is formed by a first portion 52′ of the injected part 50′ and by a second portion 53′ of the injected part 50′ superposed on the ejector 3′. It should be mentioned that, in the illustrated example, the injection mold according to the second embodiment is thus used to manufacture a timepiece component, more specifically a winding pawl 60, particularly shown in FIGS. 8 to 10, as in the case of the example illustrated by the first embodiment.

It is thus apparent that the injection cavity of the injection mold designed to form an injected part 50′, which is injection-molded, is principally delimited, in the first injection position of the injection mold, by:
- the first cutout 12′ of the at least one first intermediate plate 10′,
- the first cutout 22′ of the at least one second intermediate plate 20′,
- an end of an ejector 3′, the end of said ejector 3′ being located in the thickness of one of the intermediate plates, or even extending beyond one or more of the intermediate plates, this ejector 3′ being arranged and guided by the first part 1′ of the mold core, and
- the second part of the mold core, not shown, which defines the rear face of the injected part 50′, for example a planar face.
More specifically, the flank of the cutout12′ of the first intermediate plate10′ forms the surface of the cavity of the injection mold which defines the periphery, the lateral surface62, of the timepiece component, apart from the tenon63. The flank of the cutout22′ of the second intermediate plate20′ forms the surface of the cavity of the injection mold which defines the periphery of the tenon of the timepiece component. The upper surface of the second intermediate plate20′ also defines a surface67of the timepiece component and the second part of the mold core defines the rear face of the injected part50′. This second embodiment provides an additional flexibility relative to the first embodiment, since the separate portions of the same component are ultimately defined by the separate and movable intermediate plates of an injection mold. It is thus possible to modify just one of the two portions by modifying a single intermediate plate, without modifying the other portion or the other intermediate plate. On the other hand, only the end of the tenon is defined by a part (ejector3′) which is different from the two intermediate plates. It is thus possible to modify the shape of the component, in particular the shape of a pawl beak, in a manner which is even more versatile and/or in particular to vary the height of the tenon without changing the first part1′ of the injection mold core, by intervening only relative to the intermediate plates and/or the position of the ejector3′. Finally, this second embodiment may be extended to any injection mold comprising at least two intermediate plates which are at least partially superposed, each comprising at least one first cutout delimiting the surfaces of the injection cavity, the first respective cutouts thereof being superposed to define complementary geometries of said injection cavity. These first cutouts and/or further cutouts of the intermediate plates are also designed to permit the passage of the same ejector of an injected part through the at least two intermediate plates. FIGS.6and7illustrate the two intermediate plates10′ and20′. As in the case of the first embodiment, the injection mold is provided to have a substantially cylindrical shape, forming a blank which makes it possible to obtain a green body of annular shape comprising a plurality of components. The intermediate plates thus have a disk shape, comprising cutouts to form a part of the injection cavity and/or for the passage of different ejectors. Thus apart from the above-mentioned first cutouts12′,22′, these intermediate plates10′,20′ respectively comprise additional superposed cutouts16′,26′ permitting the passage of complementary ejectors of an injected part50′, not shown, in a similar manner to the first embodiment, in particular described relative toFIG.2. It should be mentioned that the intermediate plates10′,20′ may be manufactured by the same methods as those described within the context of the first embodiment. Naturally, the invention is not limited to the described embodiments, the plates being able to take different forms from those described. An intermediate plate may have, for example, a blind cutout and/or a positive relief and/or texturization in the region of the injection cavity of the injection mold. Moreover, the intermediate plate(s) is/are movably mounted on an injection mold so as to permit their being changed independently of one another if required. The invention also relates to an intermediate plate per se. 
Such an intermediate plate comprises at least one first cutout and is designed to be movably arranged between two mobile parts of an injection mold, such that said at least one first cutout forms a part of the injection cavity of said injection mold. This at least one first cutout is a through-opening. The intermediate plate may also comprise at least one blind cutout and/or at least one texturization of the surface of the plate forming the injection cavity. It may also comprise at least one second cutout to permit the passage of an ejector of an injection mold. It may be produced in a material having a mechanical resistance designed to withstand the pressure of an injection mold and having a geometric stability up to a temperature of at least 100° C., even up to 300° C., in particular made of metal such as a steel, or made of tungsten carbide.

The invention also relates to a method for the manufacture of an intermediate plate for an injection mold as described above, wherein it comprises a step of manufacturing the intermediate plate by galvanic deposition, in particular by the LIGA method, or by machining a metal plate, in particular by wire or laser machining, or by stamping.

The invention also relates to a method for the manufacture of a timepiece component and of a timepiece, in particular a wristwatch, wherein it comprises a step of injecting a material into the injection cavity of an injection mold as described above. Advantageously, such a material is a ceramic-based material, i.e. it comprises at least 50% ceramic by weight. The manufacture of a timepiece may comprise the integration of one or more timepiece components manufactured entirely or partially by injecting material into an injection mold according to the invention, as described above. The method for the manufacture of a timepiece component may comprise a step consisting of selecting at least one intermediate plate of said injection mold from different intermediate plates adapted to the injection mold, in order to determine the geometry of the component. The method for the manufacture of a timepiece component may also comprise a separate prior step of manufacturing said intermediate plate by galvanic deposition, in particular by the LIGA method, or by machining a metal plate, in particular by wire or laser machining, or by stamping.

The invention has been implemented within the context of the manufacture of a timepiece component. It could also be applied to the manufacture of any component of small dimensions, i.e. generally in the field of micro-injection. In an even more general manner, the solution could be implemented by any injection mold, irrespective of its dimensions.
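The ceramic-based criterion stated above (at least 50% ceramic by weight) can be expressed as a one-line check. The feedstock composition used in the example below is an illustrative assumption.

    # Minimal sketch of the "ceramic-based" criterion given above (at least 50%
    # ceramic by weight) applied to an injection feedstock. The composition values
    # are illustrative assumptions, not data from this document.
    def is_ceramic_based(composition_wt: dict) -> bool:
        total = sum(composition_wt.values())
        return composition_wt.get("ceramic", 0.0) / total >= 0.50

    feedstock = {"ceramic": 62.0, "binder": 38.0}   # weight percent, illustrative
    print(is_ceramic_based(feedstock))              # True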
11858179

While implementations are described herein by way of example, those skilled in the art will recognize that the implementations are not limited to the examples or drawings described. It should be understood that the drawings and detailed description thereto are not intended to limit implementations to the particular form disclosed but, on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope as defined by the appended claims. The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words "include," "including," and "includes" mean including, but not limited to.

DETAILED DESCRIPTION

Systems and methods to form cast or molded components using self-skinning foam materials having one or more negative space spars are described. In addition, systems and methods to form cast or molded components having one or more surfaces with resin coatings are also described. Further, systems and methods to form cast or molded components having any desired shapes or forms using thermally expanding mandrels are described.

In example embodiments, various cast or molded components, such as wings, beams, or other components for aerial vehicles, may be formed using self-skinning foam materials by the systems and methods described herein. For example, a wing may be formed using a self-skinning foam material composition that is injected into a molding tool. In addition, the wing may include one or more negative space spars that are formed using one or more mandrels inserted into the molding tool. During expansion and curing of the self-skinning foam material composition, an external skin may be formed on an exterior surface of the wing, and/or one or more internal skins may be formed on interior surfaces of the one or more negative space spars. The one or more negative space spars may reduce material usage and weight of the wing, while the external skin and/or internal skins may increase structural strength of the wing.

In other example embodiments, various cast or molded components, such as wings, beams, or other components for aerial vehicles, may be formed having one or more surfaces with resin coatings by the systems and methods described herein. For example, a wing may be formed using a foam material composition that is injected into a molding tool. In addition, the wing may include one or more negative space spars that are formed using one or more mandrels inserted into the molding tool. Further, the molding tool and/or the one or more mandrels may include resin coatings that are applied on their respective surfaces. During expansion and curing of the foam material composition, the resin coatings may also be cured, e.g., by application of heat, such that an external skin may be formed from a resin coating on an exterior surface of the wing, and/or one or more internal skins may be formed from resin coatings on interior surfaces of the one or more negative space spars. The one or more negative space spars may reduce material usage and weight of the wing, while the external skin and/or internal skins may increase structural strength of the wing.
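As a rough illustration of the weight-reduction claim above, the sketch below expresses the saving as the fraction of the component envelope occupied by the negative space spars. The volume values are made-up examples, not figures from this document.

    # Illustrative estimate of the material saving from negative space spars: the
    # fraction of the component envelope left hollow is foam that does not have to
    # be injected. All volume values are made-up examples.
    def material_saving_percent(envelope_volume_m3: float, spar_volumes_m3) -> float:
        return 100.0 * sum(spar_volumes_m3) / envelope_volume_m3

    print(round(material_saving_percent(0.020, [0.002, 0.003, 0.002]), 1), "% less foam")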
In additional example embodiments, the external skin formed on an exterior surface of a component and/or the internal skins formed on interior surfaces of the one or more negative space spars of a component may also include various surface features, such as corrugations, ribs, striations, protrusions, bumps, indentations, dimples, or other surface features, or combinations thereof. The various surface features may be formed from corresponding surface features on the molding tool and/or the one or more mandrels, and such surface features may further increase the structural strength of cast or molded components. In further example embodiments, various cast or molded components, such as wings, beams, or other components for aerial vehicles, may be formed using thermally expanding mandrels by the systems and methods described herein. For example, a thermally expanding mandrel may be formed from a thermally expanding material composition, such as micronized rubber particles and gypsum plaster. The thermally expanding mandrel may be formed in any desired shape or form. Then, component material, such as carbon fiber strips or tape, may be applied to the thermally expanding mandrel and inserted into a molding tool. Upon the application of heat to the thermally expanding mandrel and/or the molding tool, the mandrel may expand and apply pressure to the component material, and the component material may be cured to form the component. Upon completion of expansion and curing of the component, the component may be removed from the molding tool, and the thermally expanding material composition of the thermally expanding mandrel may be washed out of the component, e.g., using water, and at least partially reused or recycled in the systems and methods described herein. The thermally expanding mandrel may allow the formation of components having any desired shape or form within molding tools, while also facilitating reuse and/or recycling of the thermally expanding material composition. FIG.1Ais a schematic, perspective view diagram of a cast or molded component102, according to an implementation100. In example embodiments, the cast or molded component102may be a wing, beam, or other component of an aerial vehicle. In other example embodiments, the cast or molded component102may be any other component, e.g., a beam, spar, rod, tube, or other component, of any other type of vehicle, machine, structure, device, or system. AlthoughFIGS.1A-1Edepict an example wing as the cast or molded component102, the systems and methods described herein are not limited to wings or other components of aerial vehicles. The cast or molded component102may be formed from a foam material composition, e.g., urethane or polyurethane foams. In example embodiments, the foam material composition may be an expanding foam material that expands and cures substantially at room temperature. For example, the foam material composition may expand and cure without the application of heat to the foam material composition. In alternative embodiments, the foam material composition may be an expanding foam material that expands and cures upon application of heat. For example, the foam material composition may expand and cure at a faster rate upon application of heat as compared to the rate of expansion and curing substantially at room temperature. In further example embodiments, the foam material composition may be a self-skinning foam material composition. 
For example, upon expansion of the foam material composition that results in a pressure increase at interfaces between the foam material composition and one or more surfaces of a molding tool and/or between the foam material composition and one or more surfaces of one or more mandrels inserted into the molding tool, an external skin may be formed on an exterior surface of the component102and/or one or more internal skins may be formed on one or more interior surfaces of the component102. Example foam material compositions may include four-pound self-skinning foams, two-pound self-skinning foams, or other types of self-skinning foams. For example, a four-pound foam indicates an expansion ratio of approximately four pounds per cubic foot, a two-pound foam indicates an expansion ratio of approximately two pounds per cubic foot, etc. In some example embodiments, self-skinning foams may form skins on their exterior surface facing a molding tool, or interior surfaces facing mandrels, upon generation of at least approximately 10% overvolume pressure within a molding tool. In alternative embodiments, different percentages of overvolume pressure may be generated within the molding tool, e.g., approximately 5%, approximately 8%, approximately 12%, or approximately 15%, or within a range of approximately 5% to approximately 20% overvolume pressure. In some example embodiments, the foam material composition may be modified with the addition of water or other additives to affect the expansion ratio. For example, the addition of approximately five drops of water to approximately 160 grams of two-pound self-skinning foam may increase the expansion ratio by approximately 30%. As shown inFIG.1A, the cast or molded component102may also include one or more negative space spars105. The negative space spars105may be formed by the insertion of one or more mandrels into a molding tool that forms the cast or molded component102. Each of the negative space spars105may have any desired shape based at least in part on a shape of a corresponding mandrel. AlthoughFIG.1Ashows three negative space spars105within the cast or molded component102, any other number or arrangement of negative space spars105may be included in the component102. For example, the component102may include only a single negative space spar105in any position, or the component102may include multiple negative space spars in any arrangement. In example embodiments, the negative space spars105may reduce material usage and weight of the component102. Further, by forming the component102using a self-skinning foam material composition, one or more skins may be formed on an exterior surface and/or one or more interior surfaces, and the one or more skins may increase structural strength of the component102. For example, interlaminar shear strength between the one or more skins and other portions of the foam material composition may contribute to the increased structural strength of the component102. FIG.1Bis a schematic, cross-sectional view diagram of a cast or molded component102taken along line A-A′ shown inFIG.1A, according to an implementation. FIG.1Bshows a component102formed of a foam material composition103, e.g., a self-skinning foam material composition. An external skin104may be formed on an exterior surface of the component102upon expansion and curing of the foam material composition that results in a pressure increase at an interface between the foam material composition and surfaces of a molding tool. 
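The expansion-ratio figures quoted above, and the reported effect of a small water addition, lend themselves to a simple volume estimate. In the sketch below the unit conversion is standard, but the interpretation that a roughly 30% higher expansion ratio yields roughly 30% more cured volume for the same injected mass is an assumption made only for illustration, as are the function names.

    # Illustrative arithmetic for the expansion-ratio figures quoted above. The
    # ~30% boost corresponds to the example of about five drops of water added to
    # about 160 g of two-pound self-skinning foam.
    LB_PER_FT3_TO_KG_PER_M3 = 16.018

    def cured_density_kg_m3(nominal_lb_per_ft3: float, water_expansion_boost: float = 0.0) -> float:
        """Approximate cured density; a higher expansion ratio lowers the density
        of the cured foam for the same injected mass (illustrative assumption)."""
        return nominal_lb_per_ft3 * LB_PER_FT3_TO_KG_PER_M3 / (1.0 + water_expansion_boost)

    def expanded_volume_m3(mass_kg: float, nominal_lb_per_ft3: float,
                           water_expansion_boost: float = 0.0) -> float:
        return mass_kg / cured_density_kg_m3(nominal_lb_per_ft3, water_expansion_boost)

    # 160 g of two-pound foam, with and without the water addition:
    print(round(expanded_volume_m3(0.160, 2.0), 5), "m^3 without water")
    print(round(expanded_volume_m3(0.160, 2.0, water_expansion_boost=0.30), 5), "m^3 with ~5 drops water")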
In addition, as shown inFIG.1B, the component102may include three negative space spars105a,105b,105cformed by the insertion of corresponding mandrels into the molding tool. Three internal skins106a,106b,106cmay be formed on interior surfaces of the three negative space spars105a,105b,105cof the component102upon expansion and curing of the foam material composition that results in a pressure increase at interfaces between the foam material composition and surfaces of corresponding mandrels. In example embodiments, the negative space spars105a,105b,105cmay reduce material usage and weight of the component102. Further, by forming the component102using a self-skinning foam material composition, an external skin104may be formed on an exterior surface and one or more internal skins106a,106b,106cmay be formed on one or more interior surfaces, and the one or more skins104,106a,106b,106cmay increase structural strength of the component102. For example, interlaminar shear strength between the one or more skins and other portions of the foam material composition may contribute to the increased structural strength of the component102. FIG.1Cis a schematic, partial cross-sectional view diagram of a cast or molded component102taken along line B-B′ shown inFIG.1B, according to an implementation. As shown inFIG.1C, the component102may also be formed with one or more surface features108formed on one or more internal skins106of the component102. The surface features108may be formed by corresponding surface features included in surfaces of corresponding mandrels. The various surface features108may include corrugations, ribs, striations, protrusions, bumps, indentations, dimples, or other surface features, or combinations thereof. FIG.1Cshows three ribs108a,108b,108cformed on the internal skin106bof the negative space spar105b. The three ribs108a,108b,108care shown as indentations that extend a greater depth into the foam material composition103than a remainder of the internal skin106b. In other example embodiments, the ribs108may be formed as protrusions that extend a lesser depth into the foam material composition103than a remainder of the internal skin106b. In still other example embodiments, the internal skin106bmay include any other type, number, or arrangement of surface features. AlthoughFIG.1Cshows surface features only on internal skin106bwithin negative space spar105b, surface features of any type, number, or arrangement may also be formed on internal skins106of any other negative space spars105. FIG.1Dis another schematic, cross-sectional view diagram of a cast or molded component102taken along line A-A′ shown inFIG.1A, according to an implementation. As shown inFIG.1D, the component102may also be formed with one or more other surface features110formed on one or more internal skins106of the component102. The surface features110may be formed by corresponding surface features included in surfaces of corresponding mandrels. The various surface features110may include corrugations, ribs, striations, protrusions, bumps, indentations, dimples, or other surface features, or combinations thereof. FIG.1Dshows three sets of corrugations110a,110b,110cformed on the internal skins106a,106b,106cof the negative space spars105a,105b,105c. The three sets of corrugations110a,110b,110care shown as peaks and valleys, or ridges and grooves, that extend along a long axis of the component102, e.g., along a length of a wing. 
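Corrugating a skin, as described for FIG. 1D, gives it more developed contact area than a flat skin of the same footprint. A minimal geometric sketch follows; the sinusoidal profile is an assumption made only for illustration, since the document does not specify the corrugation shape.

    # Minimal geometric sketch: extra contact area of a corrugated skin relative to
    # a flat one, assuming a sinusoidal ridge-and-groove profile (illustrative
    # assumption only).
    import math

    def corrugated_area_ratio(amplitude_mm: float, wavelength_mm: float, samples: int = 10_000) -> float:
        """Arc length of y = A*sin(2*pi*x/L) over one wavelength, divided by the
        flat length L; equals the area ratio for corrugations running along the spar."""
        arc = 0.0
        prev_x, prev_y = 0.0, 0.0
        for i in range(1, samples + 1):
            x = wavelength_mm * i / samples
            y = amplitude_mm * math.sin(2 * math.pi * x / wavelength_mm)
            arc += math.hypot(x - prev_x, y - prev_y)
            prev_x, prev_y = x, y
        return arc / wavelength_mm

    print(round(corrugated_area_ratio(amplitude_mm=1.0, wavelength_mm=5.0), 3))  # > 1.0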
In other example embodiments, the internal skins106a,106b,106cmay include any other type, number, or arrangement of surface features. AlthoughFIG.1Dshows the same or similar surface features on internal skins106a,106b,106cwithin negative space spars105a,105b,105c, different or dissimilar surface features of any type, number, or arrangement may also be formed on internal skins106of the negative space spars105. The various surface features may be formed by corresponding surface features included in surfaces of corresponding mandrels. In some example embodiments, the mandrels may be inflatable, adjustable, or expanding mandrels, or may include inflatable, adjustable, or expanding portions therein, in order to form the surface features on the internal skins106of the negative space spars105. For example, inflatable mandrels or inflatable portions of mandrels may expand in size upon injection of fluid, e.g., gas or liquid, into the mandrel and may reduce in size upon removal of the fluid. Adjustable mandrels or adjustable portions of mandrels may include movable or actuatable portions to selectively modify a shape or surface of the mandrel. Expanding mandrels or expanding portions of mandrels may expand or reduce in size upon a change in condition, e.g., change in temperature, such as the thermally expanding mandrels described herein. AlthoughFIGS.1C and1Dshow various surface features formed on one or more internal skins106of the component102, various surface features may also be formed on an external skin104of the component102by corresponding surface features included in surfaces of a molding tool. The various surface features may be formed by corresponding surface features included in surfaces of corresponding molding tools. In some example embodiments, the molding tools may be inflatable, adjustable, or expanding molding tools, or may include inflatable, adjustable, or expanding portions therein, in order to form the surface features on the external skins104of the components102. For example, inflatable molding tools or inflatable portions of molding tools may expand in size upon injection of fluid, e.g., gas or liquid, into the molding tools and may reduce in size upon removal of the fluid. Adjustable molding tools or adjustable portions of molding tools may include movable or actuatable portions to selectively modify a shape or surface of the molding tools. Expanding molding tools or expanding portions of molding tools may expand or reduce in size upon a change in condition, e.g., change in temperature. In example embodiments, the various surface features included on the internal skins106of the negative space spars105and/or the external skin104of the component102may further increase surface area of contact between the one or more skins and other portions of the foam material composition, thereby increasing interlaminar shear strength between the one or more skins and other portions of the foam material composition to further contribute to the increased structural strength of the component102. Moreover, with the inclusion of surface features on one or more skins of the component102that increase structural strength, wall thicknesses between two or more skins of the component may be further reduced, thereby further reducing material usage and weight of the component102while increasing structural strength. FIG.1Eis yet another schematic, cross-sectional view diagram of a cast or molded component102taken along line A-A′ shown inFIG.1A, according to an implementation. 
As shown inFIG.1E, the component102may also be formed with one or more support materials112included at least partially within or attached or adhered to the foam material composition103. For example, the support materials112may include a beam, rod, spar, or other structural support. In example embodiments, the support materials112may be inserted into a molding tool and be surrounded by and molded into the foam material composition103. In other example embodiments, the support materials112may be inserted into, attached to, or adhered to a cast or molded component102after the foam material composition has expanded and cured to form the component102. The support materials112may be formed of various types of materials, such as metals, plastics, woods, ceramics, polymers, or any other materials, or combinations thereof. In addition, the support materials112may have any desired shape. FIG.1Eshows two support materials112a,112bincluded within the foam material composition103. For example, the support material112amay have an I-beam shape, and the support material112bmay have a Z-spar shape. In other example embodiments, the component102may include any other type, shape, number, or arrangement of support materials112. In example embodiments, the various support materials included at least partially within or attached or adhered to the foam material composition may further contribute to the increased structural strength of the component102. Moreover, with the inclusion of support materials as part of the component102that increase structural strength, wall thicknesses of one or more portions of the component may be further reduced, thereby further reducing material usage and weight of the component102while increasing structural strength. WhileFIGS.1A-1Edescribe various aspects of cast or molded components102individually, the various features described with respect toFIGS.1A-1Emay be combined in various combinations. For example, external skins104of components102and/or internal skins106of negative space spars105may include combinations of various surface features, such as both ribs108and corrugations110as described with respect toFIGS.1C and1D. In addition, a first portion of a component102may include skins104,106with various surface features, and a second portion of a component102may include support materials112. Various other combinations of the various features described with respect toFIGS.1A-1Emay also be included in cast or molded components102. FIG.2is a flow diagram illustrating an example component with negative space spar(s) formation process200, according to an implementation. The process200may begin by preparing a molding tool and/or one or more mandrels, as at202. For example, one or more release agents may be applied to the molding tool and/or the one or more mandrels such that a cast or molded component may be removed from the molding tool and/or the one or more mandrels may be removed from the component upon completion of the process200. Further, the molding tool and/or the one or more mandrels may be designed with various draft angles to facilitate removal of a cast or molded component from the molding tool and/or removal of the one or more mandrels from the component. 
Moreover, the molding tool and/or the one or more mandrels may include various surface features as described herein, in order to create corresponding surface features on exterior and/or interior surfaces of the component, such as corrugations, ribs, striations, protrusions, bumps, indentations, dimples, or other surface features, or combinations thereof. The molding tool may be a single-part, two-part, or multi-part molding tool, and the molding tool may be formed from various materials, such as aluminum, carbon, steel, Inconel, other metals, ceramics, polymers, composites, or combinations thereof. In addition, the one or more mandrels may be solid or rigid mandrels, or inflatable, adjustable, or expanding mandrels, and the one or more mandrels may be formed from various materials, such as aluminum, carbon, steel, Inconel, other metals, ceramics, polymers, composites, plastics, silicone, rubber, or combinations thereof. The process200may continue by inserting the one or more mandrels into the molding tool, as at204. For example, the one or more mandrels may be placed in position within or relative to the molding tool, in order to form the component with desired negative space spars at particular positions and/or with a particular arrangement. The process200may then proceed by preparing the foam material composition, as at206. For example, the foam material composition may be a self-skinning foam material composition, as described herein. In addition, the foam material composition may be a two-part or multi-part composition of different materials that may begin to expand and cure upon mixing of the different materials. In some embodiments, the foam material composition may expand and cure within seconds, e.g., 10-59 seconds, or minutes, e.g., 1-30 minutes. In other embodiments, the foam material composition may expand and cure over a shorter or longer duration of time. In example embodiments, the foam material composition may be cooled, e.g., down to about 15 degrees Fahrenheit, in order to slow the expansion and curing of the foam material composition and thereby increase the duration of time during which the foam material composition may be prepared and handled. Further, the foam material composition may be modified with the addition of water or other additives to affect the expansion ratio. The process200may then continue by injecting the foam material composition into the molding tool, as at208. In addition, the foam material composition may by injected around the one or more mandrels that are inserted into or placed relative to the molding tool. For example, the foam material composition may be metered into the molding tool such that a precisely measured or determined amount of foam material composition is injected into the molding tool. The amount of foam material composition to be injected or metered may be determined based at least in part on a volume of the cast or molded component and a desired overvolume pressure to be generated during expansion and curing of the foam material composition within the molding tool. As described herein, in some example embodiments, self-skinning foams may form skins on their exterior surface facing the molding tool, and/or on their interior surfaces facing one or more mandrels, upon generation of at least approximately 10% overvolume pressure within the molding tool. 
In alternative embodiments, different percentages of overvolume pressure may be generated within the molding tool, e.g., approximately 5%, approximately 8%, approximately 12%, or approximately 15%, or within a range of approximately 5% to approximately 20% overvolume pressure. In further example embodiments, prior to injecting the foam material composition into the molding tool, one or more support materials as described herein may be inserted or placed into the molding tool. The process200may then proceed by closing the molding tool, as at210. The closed molding tool may substantially seal the foam material composition within the molding tool and/or around the one or more mandrels. In some example embodiments, prior to closing the molding tool, one or more support materials as described herein may be inserted or placed into the molding tool. Then, the process200may continue by allowing the foam material composition to expand and cure, as at212. Within the closed molding tool, the foam material composition may generate a desired overvolume pressure, such that one or more skins may be formed on surfaces of the component. After completion of the expansion and curing of the foam material composition, the process200may continue by opening the molding tool, as at214. Then, the process200may proceed by removing the component from the molding tool, as at216, and by removing the one or more mandrels from the component, as at218. For example, the release agents and/or draft angles of the molding tool may facilitate removal of the component from the molding tool. Likewise, the release agents and/or draft angles of the one or more mandrels may facilitate removal of the one or more mandrels from the component. In some example embodiments, after removing the component from the molding tool and/or after removing the one or more mandrels from the component, one or more support materials as described herein may be inserted, attached, or adhered to the component. The process200may then end, as at220. The cast or molded component may include an external skin on an exterior surface of the component, and may also include one or more internal skins on interior surfaces of one or more negative space spars formed by the one or more mandrels. As described herein, the one or more negative space spars may reduce material usage and weight of the component, and interlaminar shear strength between the one or more skins and other portions of the foam material composition may contribute to the increased structural strength of the component. All or portions of the process200described herein may be performed by automated or semi-automated machinery that is controlled and/or programmed to perform one or more steps of the process200. For example, automated or semi-automated machinery or robotics may prepare the molding tool and/or the one or more mandrels for molding components, and/or may insert the one or more mandrels into the molding tool. In addition, automated or semi-automated machinery or robotics may prepare the foam material composition, and/or may inject or meter the foam material composition into the molding tool. Further, automated or semi-automated machinery or robotics may close the molding tool to allow the foam material composition to expand and cure, and/or may open the molding tool upon completion. Moreover, automated or semi-automated machinery or robotics may remove the component from the molding tool, and/or may remove the one or more mandrels from the component. 
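The sequence of process 200 of FIG. 2 can be summarized, together with the metering consideration described above, in the following sketch. The step wording, the helper function, and the numeric inputs are illustrative assumptions; only the 5% to 20% overvolume range and the approximately 10% self-skinning threshold come from the text.

    # Illustrative encoding of the component-formation process 200 of FIG. 2.
    PROCESS_200_STEPS = [
        "prepare molding tool and mandrels (release agents, draft angles, surface features)",
        "insert mandrels into molding tool",
        "prepare (and optionally cool) the foam material composition",
        "meter and inject the foam composition around the mandrels",
        "close the molding tool",
        "allow the foam to expand and cure, generating the target overvolume pressure",
        "open the molding tool",
        "remove the component from the molding tool",
        "remove the mandrels from the component",
    ]

    def metered_charge_kg(mold_volume_m3, mandrel_volumes_m3, foam_density_kg_m3,
                          overvolume_fraction=0.10):
        """Charge sized for the net cavity plus the desired overvolume (default ~10%,
        the threshold at which self-skinning foams are said to form skins)."""
        net_cavity = mold_volume_m3 - sum(mandrel_volumes_m3)
        return net_cavity * foam_density_kg_m3 * (1.0 + overvolume_fraction)

    for i, step in enumerate(PROCESS_200_STEPS, start=1):
        print(f"{i}. {step}")
    # Example with made-up volumes and a ~4 lb/ft^3 (about 64 kg/m^3) foam:
    print("charge:", round(metered_charge_kg(0.020, [0.002, 0.003, 0.002], 64.0), 3), "kg")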
FIG.3Ais a schematic, perspective view diagram of a cast or molded component302, according to an implementation300. In example embodiments, the cast or molded component302may be a wing, beam, or other component of an aerial vehicle. In other example embodiments, the cast or molded component302may be any other component, e.g., a beam, spar, rod, tube, or other component, of any other type of vehicle, machine, structure, device, or system. AlthoughFIGS.3A and3Bdepict an example wing as the cast or molded component302, the systems and methods described herein are not limited to wings or other components of aerial vehicles. The cast or molded component302may be formed from a foam material composition, e.g., urethane or polyurethane foams, and one or more resin coatings, e.g., urethane resins, on surfaces of the component. In example embodiments, the foam material composition may be an expanding foam material that expands and cures substantially at room temperature. For example, the foam material composition may expand and cure without the application of heat to the foam material composition. In alternative embodiments, the foam material composition may be an expanding foam material that expands and cures upon application of heat. For example, the foam material composition may expand and cure at a faster rate upon application of heat as compared to the rate of expansion and curing substantially at room temperature. Example foam material compositions may include four-pound foams, two-pound foams, or other types of foams. For example, a four-pound foam indicates an expansion ratio of approximately four pounds per cubic foot, a two-pound foam indicates an expansion ratio of approximately two pounds per cubic foot, etc. In some example embodiments, the foam material composition may be modified with the addition of water or other additives to affect the expansion ratio. For example, the addition of approximately five drops of water to approximately 160 grams of two-pound foam may increase the expansion ratio by approximately 30%. The one or more resin coatings on surfaces of the component302may be formed from urethane resins. In some example embodiments, the urethane resins may be modified with microspheres or other additives to reduce the weight of the urethane resins. For example, the microspheres may be hollow glass, plastic, or polymer microspheres or microbeads. In further example embodiments, the urethane resins may be modified with pigments or other coloring agents in order to form a component having a desired color. In example embodiments, the resin coatings may cure substantially at room temperature. For example, the resin coatings may cure without the application of heat to the resin coatings. In alternative embodiments, the resin coatings may be cured with the application of heat. For example, the resin coatings may cure at a faster rate upon application of heat as compared to the rate of curing substantially at room temperature. The one or more resin coatings may be applied to surfaces of a molding tool and/or one or more mandrels, and the one or more resin coatings may be cured, e.g., upon application of heat. For example, the molding tool and/or the one or more mandrels may be heated, e.g., placed in an oven, in order to cure the resin coatings. The curing of the resin coatings may at least partially overlap with the expansion and curing of the foam material composition. 
As a result, the resin coatings may form an external skin on an exterior surface of the component302, and/or one or more internal skins on interior surfaces of the component302. As described herein, one or more negative space spars formed by the one or more mandrels may reduce material usage and weight of the component302, and interlaminar shear strength between the one or more skins formed by resin coatings and portions of the foam material composition may contribute to the increased structural strength of the component302. As shown inFIG.3A, the cast or molded component302may also include one or more negative space spars305. The negative space spars305may be formed by the insertion of one or more mandrels into a molding tool that forms the cast or molded component302. Each of the negative space spars305may have any desired shape based at least in part on a shape of a corresponding mandrel. AlthoughFIG.3Ashows three negative space spars305within the cast or molded component302, any other number or arrangement of negative space spars305may be included in the component302. For example, the component302may include only a single negative space spar305in any position, or the component302may include multiple negative space spars in any arrangement. In example embodiments, the negative space spars305may reduce material usage and weight of the component302. Further, by forming the component302using resin coatings around a foam material composition, one or more skins may be formed by the resin coatings on an exterior surface and/or one or more interior surfaces, and the one or more skins may increase structural strength of the component302. For example, interlaminar shear strength between the one or more skins formed by the resin coatings and portions of the foam material composition may contribute to the increased structural strength of the component302. FIG.3Bis a schematic, cross-sectional view diagram of a cast or molded component302taken along line A-A′ shown inFIG.3A, according to an implementation. FIG.3Bshows a component302formed of a foam material composition303, e.g., an expanding foam material composition. An external skin304may be formed on an exterior surface of the component302by a resin coating applied to surfaces of a molding tool and cured, e.g., by application of heat, at least partially during expansion and curing of the foam material composition within the molding tool. In addition, as shown inFIG.3B, the component302may include three negative space spars305a,305b,305cformed by the insertion of corresponding mandrels into the molding tool. Three internal skins306a,306b,306cmay be formed on interior surfaces of the three negative space spars305a,305b,305cof the component302by resin coatings applied to surfaces of the corresponding mandrels and cured, e.g., by application of heat, at least partially during expansion and curing of the foam material composition within the molding tool. In example embodiments, the negative space spars305a,305b,305cmay reduce material usage and weight of the component302. Further, by forming the component302using an expanding foam material composition and resin coatings on one or more surfaces, an external skin304may be formed on an exterior surface by a resin coating and one or more internal skins306a,306b,306cmay be formed on one or more interior surfaces by resin coatings, and the one or more skins304,306a,306b,306cmay increase structural strength of the component302. 
For example, interlaminar shear strength between the one or more skins formed by the resin coatings and portions of the foam material composition may contribute to the increased structural strength of the component302. As described herein with respect toFIGS.1C-1E, the component302shown inFIGS.3A and3Bmay also be formed with one or more surface features formed on one or more internal skins306of the component302. The surface features may be formed by corresponding surface features included in surfaces of corresponding mandrels. The various surface features may include corrugations, ribs, striations, protrusions, bumps, indentations, dimples, or other surface features, or combinations thereof. In example embodiments, surface features of any type, number, or arrangement may be formed on internal skins306of any negative space spars305. The various surface features may be formed by corresponding surface features included in surfaces of corresponding mandrels. In some example embodiments, the mandrels may be inflatable, adjustable, or expanding mandrels, or may include inflatable, adjustable, or expanding portions therein, in order to form the surface features on the internal skins306of the negative space spars305. For example, inflatable mandrels or inflatable portions of mandrels may expand in size upon injection of fluid, e.g., gas or liquid, into the mandrel and may reduce in size upon removal of the fluid. Adjustable mandrels or adjustable portions of mandrels may include movable or actuatable portions to selectively modify a shape or surface of the mandrel. Expanding mandrels or expanding portions of mandrels may expand or reduce in size upon a change in condition, e.g., change in temperature, such as the thermally expanding mandrels described herein. In further example embodiments, as described herein with respect toFIGS.1C-1E, the component302shown inFIGS.3A and3Bmay also be formed with various surface features formed on an external skin304of the component302. The surface features may be formed by corresponding surface features included in surfaces of a molding tool. The various surface features may include corrugations, ribs, striations, protrusions, bumps, indentations, dimples, or other surface features, or combinations thereof. In example embodiments, surface features of any type, number, or arrangement may be formed on the external skin304of the component302. The various surface features may be formed by corresponding surface features included in surfaces of corresponding molding tools. In some example embodiments, the molding tools may be inflatable, adjustable, or expanding molding tools, or may include inflatable, adjustable, or expanding portions therein, in order to form the surface features on the external skins304of the components302. For example, inflatable molding tools or inflatable portions of molding tools may expand in size upon injection of fluid, e.g., gas or liquid, into the molding tools and may reduce in size upon removal of the fluid. Adjustable molding tools or adjustable portions of molding tools may include movable or actuatable portions to selectively modify a shape or surface of the molding tools. Expanding molding tools or expanding portions of molding tools may expand or reduce in size upon a change in condition, e.g., change in temperature. 
In example embodiments, the various surface features included on the internal skins306of the negative space spars305and/or the external skin304of the component302may further increase surface area of contact between the one or more skins formed by resin coatings and portions of the foam material composition, thereby increasing interlaminar shear strength between the one or more skins and portions of the foam material composition to further contribute to the increased structural strength of the component302. Moreover, with the inclusion of surface features on one or more skins of the component302that increase structural strength, wall thicknesses between two or more skins of the component may be further reduced, thereby further reducing material usage and weight of the component302while increasing structural strength. In still further example embodiments, as described herein with respect toFIGS.1C-1E, the component302shown inFIGS.3A and3Bmay also be formed with one or more support materials included at least partially within or attached or adhered to the foam material composition303. For example, the support materials may include a beam, rod, spar, or other structural support. In example embodiments, the support materials may be inserted into a molding tool and be surrounded by and molded into the foam material composition303. In other example embodiments, the support materials may be inserted into, attached to, or adhered to a cast or molded component302after the foam material composition has expanded and cured to form the component302. The support materials may be formed of various types of materials, such as metals, plastics, woods, ceramics, polymers, or any other materials, or combinations thereof. In addition, the support materials may have any desired shape. In example embodiments, the component302may include any type, shape, number, or arrangement of support materials. In example embodiments, the various support materials included at least partially within or attached or adhered to the foam material composition may further contribute to the increased structural strength of the component302. Moreover, with the inclusion of support materials as part of the component302that increase structural strength, wall thicknesses of one or more portions of the component may be further reduced, thereby further reducing material usage and weight of the component302while increasing structural strength. While the description with respect toFIGS.3A and3Bdescribes various aspects of cast or molded components302individually, the various features described herein may be combined in various combinations. For example, external skins304of components302and/or internal skins306of negative space spars305may include combinations of various surface features, such as both ribs and corrugations. In addition, a first portion of a component302may include skins304,306with various surface features, and a second portion of a component302may include support materials. Various other combinations of the various features described herein may also be included in cast or molded components302. FIG.4is a flow diagram illustrating an example component with negative space spar(s) and resin coating formation process400, according to an implementation. The process400may begin by preparing a molding tool and/or one or more mandrels, as at402. 
For example, one or more release agents may be applied to the molding tool and/or the one or more mandrels such that a cast or molded component may be removed from the molding tool and/or the one or more mandrels may be removed from the component upon completion of the process400. Further, the molding tool and/or the one or more mandrels may be designed with various draft angles to facilitate removal of a cast or molded component from the molding tool and/or removal of the one or more mandrels from the component. Moreover, the molding tool and/or the one or more mandrels may include various surface features as described herein, in order to create corresponding surface features on exterior and/or interior surfaces of the component, such as corrugations, ribs, striations, protrusions, bumps, indentations, dimples, or other surface features, or combinations thereof. The molding tool may be a single-part, two-part, or multi-part molding tool, and the molding tool may be formed from various materials, such as aluminum, carbon, steel, Inconel, other metals, ceramics, polymers, composites, or combinations thereof. In addition, the one or more mandrels may be solid or rigid mandrels, or inflatable, adjustable, or expanding mandrels, and the one or more mandrels may be formed from various materials, such as aluminum, carbon, steel, Inconel, other metals, ceramics, polymers, composites, plastics, silicone, rubber, or combinations thereof. The process400may continue by preparing a resin coating material, as at404. For example, the resin coating material may be a urethane resin that cures upon application of heat and/or substantially at room temperature. In addition, the resin coating material may be modified with additives such as microspheres or microbeads to reduce the weight of the resin coating material. Further, the resin coating material may be modified with pigments or coloring agents to form a component with a desired color. The process400may then proceed by applying the resin coating material to the molding tool and/or one or more mandrels, as at406. For example, the resin coating material may be applied to surfaces of the molding tool and/or the one or more mandrels in a thin layer, e.g., having a thickness of approximately 15-20 thousandths of an inch. In other example embodiments, the resin coating material may be applied in layers having different thicknesses, as desired, that may affect the resultant weight and/or strength of the component. The process400may continue by inserting the one or more mandrels into the molding tool, as at408. For example, the one or more mandrels may be placed in position within or relative to the molding tool, in order to form the component with desired negative space spars at particular positions and/or with a particular arrangement. The process400may then proceed by preparing the foam material composition, as at410. For example, the foam material composition may be an expanding foam material composition, as described herein. In addition, the foam material composition may be a two-part or multi-part composition of different materials that may begin to expand and cure upon mixing of the different materials. In some embodiments, the foam material composition may expand and cure within seconds, e.g., 10-59 seconds, or minutes, e.g., 1-30 minutes. In other embodiments, the foam material composition may expand and cure over a shorter or longer duration of time. 
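As a rough sense of scale for the coating step at 406, the added resin mass can be estimated from coated area, the approximately 15-20 thousandths of an inch layer thickness quoted above, and an assumed density for a microsphere-filled urethane resin (the density value below is an assumption, not taken from this disclosure).

```python
# Rough estimate of resin-coating mass: coated area x layer thickness x density.
# The 15-20 mil thickness is from the text; the resin density is an assumed value.

MIL_TO_CM = 0.00254  # one thousandth of an inch, in centimeters

def coating_mass_grams(area_cm2, thickness_mil, resin_density_g_cm3=0.9):
    """Mass of a resin layer of the given thickness over the given area."""
    return area_cm2 * thickness_mil * MIL_TO_CM * resin_density_g_cm3

if __name__ == "__main__":
    area_cm2 = 5000.0  # hypothetical combined tool and mandrel surface area
    for mils in (15, 20):
        grams = coating_mass_grams(area_cm2, mils)
        print(f"{mils} mil coating over {area_cm2:.0f} cm^2: ~{grams:.0f} g of resin")
```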
In example embodiments, the foam material composition may be cooled, e.g., down to about 15 degrees Fahrenheit, in order to slow the expansion and curing of the foam material composition and thereby increase the duration of time during which the foam material composition may be prepared and handled. Further, the foam material composition may be modified with the addition of water or other additives to affect the expansion ratio. The process400may then continue by injecting the foam material composition into the molding tool, as at412. In addition, the foam material composition may be injected around the one or more mandrels that are inserted into or placed relative to the molding tool. For example, the foam material composition may be metered into the molding tool such that a precisely measured or determined amount of foam material composition is injected into the molding tool. The amount of foam material composition to be injected or metered may be determined based at least in part on a volume of the cast or molded component and a desired overvolume pressure to be generated during expansion and curing of the foam material composition within the molding tool. In further example embodiments, prior to injecting the foam material composition into the molding tool, one or more support materials as described herein may be inserted or placed into the molding tool. The process400may then proceed by closing the molding tool, as at414. The closed molding tool may substantially seal the foam material composition within the molding tool and/or around the one or more mandrels. In some example embodiments, prior to closing the molding tool, one or more support materials as described herein may be inserted or placed into the molding tool. Then, the process400may continue by allowing the foam material composition to expand and cure, as at416. Within the closed molding tool, the foam material composition may generate a desired overvolume pressure. In some example embodiments, the foam material composition may expand and generate at least approximately 10% overvolume pressure within the molding tool. In alternative embodiments, different percentages of overvolume pressure may be generated within the molding tool, e.g., approximately 5%, approximately 8%, approximately 12%, or approximately 15%, or within a range of approximately 5% to approximately 20% overvolume pressure. The process400may then continue by applying heat to the closed molding tool and/or the one or more mandrels to cure the resin coating material, as at418. For example, heat may be applied by placing the molding tool and/or the one or more mandrels in a curing oven. Alternatively, heat may be applied to the molding tool and/or one or more mandrels by other methods, such as by direct application of heat to one or more portions of the molding tool and/or one or more mandrels. Some example resin coating materials may have a cure time of approximately a few hours at approximately 150 degrees Fahrenheit. In other example embodiments, other combinations of curing temperatures and curing times may be used based at least in part on properties of the resin coating material. The curing of the resin coating material may at least partially overlap with the expansion and curing of the foam material composition, in order to increase the interlaminar shear strength between the resin coating material and the foam material composition. 
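One simple way to read the metering step at 412 is as a mass target computed from the cavity free volume, the foam's pound rating, and an overfill fraction chosen to produce the desired overvolume pressure. The disclosure does not prescribe a formula, so the sketch below, with assumed input values, is only a first-order illustration of that reading.

```python
# First-order foam metering estimate: fill the cavity free volume (tool volume
# minus mandrel volume) at the foam's rated density, plus an overfill fraction
# intended to generate overvolume pressure.  All inputs are assumed values.

LB_TO_GRAMS = 453.6
CUBIC_FEET_PER_M3 = 35.315

def metered_foam_grams(tool_volume_m3, mandrel_volume_m3,
                       pound_rating=2.0, overvolume_fraction=0.10):
    free_volume_cuft = (tool_volume_m3 - mandrel_volume_m3) * CUBIC_FEET_PER_M3
    mass_lb = free_volume_cuft * pound_rating * (1.0 + overvolume_fraction)
    return mass_lb * LB_TO_GRAMS

if __name__ == "__main__":
    # Hypothetical 0.030 m^3 cavity containing 0.006 m^3 of mandrels, 10% overvolume.
    grams = metered_foam_grams(0.030, 0.006)
    print(f"meter approximately {grams:.0f} g of two-pound foam")
```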
In alternative embodiments, the resin coating materials may cure substantially at room temperature, e.g., without the application of heat. For example, the resin coating materials may have a cure time of approximately a few hours or a few days substantially at room temperature. After completion of the expansion and curing of the foam material composition and completion of the curing of the resin coating material, the process400may continue by stopping the application of heat to the closed molding tool and/or the one or more mandrels, as at420, and by opening the molding tool, as at422. Then, the process400may proceed by removing the component from the molding tool, as at424, and by removing the one or more mandrels from the component, as at426. For example, the release agents and/or draft angles of the molding tool may facilitate removal of the component from the molding tool. Likewise, the release agents and/or draft angles of the one or more mandrels may facilitate removal of the one or more mandrels from the component. In some example embodiments, after removing the component from the molding tool and/or after removing the one or more mandrels from the component, one or more support materials as described herein may be inserted, attached, or adhered to the component. The process400may then end, as at428. The cast or molded component may include an external skin formed by a resin coating on an exterior surface of the component, and may also include one or more internal skins formed by resin coatings on interior surfaces of one or more negative space spars formed by the one or more mandrels. As described herein, the one or more negative space spars may reduce material usage and weight of the component, and interlaminar shear strength between the one or more skins formed by resin coatings and portions of the foam material composition may contribute to the increased structural strength of the component. All or portions of the process400described herein may be performed by automated or semi-automated machinery that is controlled and/or programmed to perform one or more steps of the process400. For example, automated or semi-automated machinery or robotics may prepare the molding tool and/or the one or more mandrels for molding components, and/or may insert the one or more mandrels into the molding tool. Further, automated or semi-automated machinery or robotics may prepare the resin coating material, and/or may apply the resin coating material to the molding tool and/or the one or more mandrels. In addition, automated or semi-automated machinery or robotics may prepare the foam material composition, and/or may inject or meter the foam material composition into the molding tool. Further, automated or semi-automated machinery or robotics may close the molding tool to allow the foam material composition to expand and cure, may apply heat to the closed molding tool and/or the one or more mandrels to cure the resin coating material, and/or may open the molding tool upon completion. Moreover, automated or semi-automated machinery or robotics may remove the component from the molding tool, and/or may remove the one or more mandrels from the component. FIG.5Ais a schematic, perspective view diagram of a thermally expanding mandrel505, according to an implementation500. 
As shown inFIG.5A, the thermally expanding mandrel505may be formed in any desired shape, e.g., a wing, beam, or other component of an aerial vehicle, or any other beam, spar, rod, tube, or other component, of any other type of vehicle, machine, structure, device, or system. The thermally expanding mandrel505may be formed of a material composition507that facilitates expansion of the mandrel505upon application of heat. For example, the material composition507of the mandrel505may include thermally expanding particles and binder material. In example embodiments, the thermally expanding particles may include micronized rubber particles, e.g., +/−80 mesh micron rubber dust. In alternative embodiments, the mandrel505may be formed from micronized rubber powder, silicone rubber microspheres, silicone rubber powder, or other thermally expanding particles. In still further embodiments, the mandrel505may be formed from combinations of different types of thermally expanding particles. Some or all of the thermally expanding particles may be recycled, recyclable, and/or reusable materials, such as micronized rubber particles formed from crushed and ground rubber tires. In example embodiments, the binder material may include gypsum plaster. In alternative embodiments, the mandrel505may be formed from other types of binder material, such as other water-soluble binder materials. In still further embodiments, the mandrel505may be formed from combinations of different types of binder materials. Some or all of the binder materials may be recycled, recyclable, and/or reusable materials, such as gypsum plaster. The thermally expanding particles may have a relatively high coefficient of thermal expansion (CTE). For example, the thermally expanding particles may have a CTE that is higher than a CTE of materials of a molding tool at least partially inside of which the mandrel505is to be used and/or placed. As described herein, the molding tool may be formed from various materials, such as aluminum, carbon, steel, Inconel, other metals, ceramics, polymers, composites, or combinations thereof. The thermally expanding particles of the mandrel505and materials of the molding tool may be selected such that the CTE of the thermally expanding particles of the mandrel505is higher than the CTE of the materials of the molding tool. In this manner, the mandrel505may, upon application of heat, expand at a faster rate than the molding tool, such that the mandrel505may apply pressure to component materials applied thereto against surfaces of the molding tool. In some example embodiments, expansion of the thermally expanding particles of the mandrel505may generate at least approximately 10% overvolume pressure within the molding tool. In alternative embodiments, different percentages of overvolume pressure may be generated within the molding tool, e.g., approximately 5%, approximately 8%, approximately 12%, or approximately 15%, or within a range of approximately 5% to approximately 20% overvolume pressure. The thermally expanding mandrel505may be formed using various methods and processes. For example, the mandrel505may be formed using molding processes by injecting or metering the material composition507into a mold of any desired shape and curing or otherwise hardening the material composition507. In other example embodiments, the mandrel505may be formed using 3-D printing processes by printing, applying, or building up the material composition507into any desired shape. 
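The requirement that the mandrel505expand faster than the molding tool can be illustrated with a simple volumetric expansion comparison. The linear CTE values below are assumed, typical figures (not from this disclosure), and volumetric expansion is approximated as three times the linear expansion; the relation between this volumetric mismatch and the stated overvolume pressure is not specified here.

```python
# Differential thermal expansion of a thermally expanding mandrel versus an
# aluminum molding tool.  CTE values are assumed typical figures, not from the
# disclosure; volumetric strain is approximated as 3 * alpha * delta_T.

def volumetric_expansion(alpha_linear_per_C, delta_T_C):
    return 3.0 * alpha_linear_per_C * delta_T_C

if __name__ == "__main__":
    delta_T = 100.0                                   # e.g. ~20 C up to ~120 C cure
    mandrel = volumetric_expansion(150e-6, delta_T)   # rubber-particle composite (assumed)
    tool = volumetric_expansion(23e-6, delta_T)       # aluminum tool (assumed)
    print(f"mandrel volumetric expansion: {mandrel:.1%}")
    print(f"tool volumetric expansion:    {tool:.1%}")
    print(f"differential expansion pressing component materials outward: {mandrel - tool:.1%}")
```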
In further example embodiments, the mandrel505may be formed using machining processes by forming a blank of material from the material composition507and cutting, turning, drilling, grinding, polishing, or otherwise machining the material composition507into any desired shape. Moreover, the mandrel505may be formed using any of these processes, other processes, or combinations of different processes. In example embodiments, the cast or molded components described herein, such as a wing, beam, or other component of an aerial vehicle, may be at least partially formed using a thermally expanding mandrel505to form one or more negative space spars. In other example embodiments, the cast or molded components may be any other component, e.g., a beam, spar, rod, tube, or other component, of any other type of vehicle, machine, structure, device, or system. AlthoughFIGS.5A-5Ddepict an example wing as the cast or molded component, the systems and methods described herein are not limited to wings or other components of aerial vehicles. The cast or molded component that may be at least partially formed using a thermally expanding mandrel505may be formed from a foam material composition, e.g., urethane or polyurethane foams, expanding foam material compositions, self-skinning foam material compositions, and/or one or more resin coatings, e.g., urethane resins, on surfaces of the component as described herein. In example embodiments, the foam material composition may be an expanding foam material that expands and cures substantially at room temperature. For example, the foam material composition may expand and cure without the application of heat to the foam material composition. In further example embodiments, the foam material composition may be a self-skinning foam material composition. For example, upon expansion of the foam material composition that results in a pressure increase at interfaces between the foam material composition and one or more surfaces of a molding tool and/or between the foam material composition and one or more surfaces of one or more mandrels inserted into the molding tool, an external skin may be formed on an exterior surface of the component and/or one or more internal skins may be formed on one or more interior surfaces of the component. Example foam material compositions may include four-pound foams, two-pound foams, or other types of foams. For example, a four-pound foam indicates an expansion ratio of approximately four pounds per cubic foot, a two-pound foam indicates an expansion ratio of approximately two pounds per cubic foot, etc. In some example embodiments, the foam material composition may be modified with the addition of water or other additives to affect the expansion ratio. For example, the addition of approximately five drops of water to approximately 160 grams of two-pound foam may increase the expansion ratio by approximately 30%. In some example embodiments, self-skinning foams may form skins on their exterior surface facing a molding tool, or interior surfaces facing mandrels, upon generation of at least approximately 10% overvolume pressure within a molding tool. In alternative embodiments, different percentages of overvolume pressure may be generated within the molding tool, e.g., approximately 5%, approximately 8%, approximately 12%, or approximately 15%, or within a range of approximately 5% to approximately 20% overvolume pressure. 
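As a rough illustration of the pound-rating and water-additive figures repeated above, the sketch below converts a foam charge mass to an approximate free-rise volume, with and without the roughly 30% expansion increase attributed to adding about five drops of water to about 160 grams of two-pound foam. Real foams vary by formulation, so the numbers are indicative only.

```python
# Approximate free-rise volume of a foam charge from its pound rating
# (free-rise density in lb/ft^3), with an optional expansion boost such as the
# ~30% increase quoted for the water additive.  Indicative figures only.

GRAMS_PER_POUND = 453.6
LITERS_PER_CUBIC_FOOT = 28.317

def free_rise_volume_liters(charge_grams, pound_rating, expansion_boost=0.0):
    effective_density = pound_rating / (1.0 + expansion_boost)  # lb/ft^3
    charge_lb = charge_grams / GRAMS_PER_POUND
    return (charge_lb / effective_density) * LITERS_PER_CUBIC_FOOT

if __name__ == "__main__":
    base = free_rise_volume_liters(160, 2.0)           # ~5.0 liters
    boosted = free_rise_volume_liters(160, 2.0, 0.30)  # ~6.5 liters
    print(f"160 g of two-pound foam: ~{base:.1f} L free rise")
    print(f"with ~5 drops of water (+30%): ~{boosted:.1f} L free rise")
```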
As described herein, one or more negative space spars formed by the one or more mandrels may reduce material usage and weight of the component, and interlaminar shear strength between the one or more skins and portions of the foam material composition may contribute to the increased structural strength of the component. The one or more resin coatings on surfaces of the component that may be at least partially formed using a thermally expanding mandrel505may be formed from urethane resins. In some example embodiments, the urethane resins may be modified with microspheres or other additives to reduce the weight of the urethane resins. For example, the microspheres may be hollow glass, plastic, or polymer microspheres or microbeads. In further example embodiments, the urethane resins may be modified with pigments or other coloring agents in order to form a component having a desired color. In example embodiments, the resin coatings may cure substantially at room temperature. For example, the resin coatings may cure without the application of heat to the resin coatings. In alternative embodiments, the resin coatings may be cured with the application of heat. For example, the resin coatings may cure at a faster rate upon application of heat as compared to the rate of curing substantially at room temperature. The one or more resin coatings may be applied to surfaces of a molding tool and/or one or more mandrels, and the one or more resin coatings may be cured, e.g., upon application of heat. For example, the molding tool and/or the one or more mandrels may be heated, e.g., placed in an oven, in order to cure the resin coatings. The curing of the resin coatings may at least partially overlap with the expansion and curing of the foam material composition. As a result, the resin coatings may form an external skin on an exterior surface of the component, and/or one or more internal skins on interior surfaces of the component. As described herein, one or more negative space spars formed by the one or more mandrels may reduce material usage and weight of the component, and interlaminar shear strength between the one or more skins formed by resin coatings and portions of the foam material composition may contribute to the increased structural strength of the component. In further example embodiments, the molded component that may be at least partially formed using a thermally expanding mandrel505may be formed from component materials that are applied to, laid onto, or wrapped around the thermally expanding mandrel505. The component materials may include carbon fiber strips, carbon fiber tape, carbon fiber sheets, Kevlar, fiberglass, composites, other materials such as polymers, plastics, ceramics, or combinations thereof that are applied to, laid onto, or wrapped around the thermally expanding mandrel505, and the component materials may be cured upon application of heat and pressure. For example, the molding tool and/or the one or more mandrels may be heated, e.g., placed in an oven, in order to expand and cure the component materials. The thermally expanding mandrel505may expand upon application of heat, thereby expanding and applying pressure to the component materials within the molding tool, and may cure the expanded and compressed component materials due to the application of heat and pressure. 
In example embodiments including various combinations of foam material compositions, resin coatings, and/or component materials, the curing of the component materials and/or resin coatings may at least partially overlap with the expansion and curing of the foam material composition. As a result, the component materials and/or resin coatings may form an external skin on an exterior surface of the component, and/or one or more internal skins on interior surfaces of the component. As described herein, one or more negative space spars formed by the one or more mandrels may reduce material usage and weight of the component, and interlaminar shear strength between the one or more skins formed by component materials and/or resin coatings and portions of the foam material composition may contribute to the increased structural strength of the component. AlthoughFIG.5Ashows only a single thermally expanding mandrel505that may be used to form a component or a negative space spar within a component having any desired shape, any other number, combination, or arrangement of one or more mandrels505may be used to form any desired number, combination, or arrangement of components or negative space spars within one or more components. In example embodiments, the negative space spars may reduce material usage and weight of the component. FIG.5Bis a schematic, perspective view diagram of another thermally expanding mandrel505, according to an implementation. As shown inFIG.5B, the thermally expanding mandrel505may also be formed with one or more surface features508,510, as described herein with respect toFIGS.1C-1E, formed on one or more portions of the exterior surface of the mandrel505. The various surface features may include corrugations510, ribs508, striations, protrusions, bumps, indentations, dimples, or other surface features, or combinations thereof. WhileFIG.5Bshows a particular number, combination, and arrangement of ribs508a,508b,508c,508dand corrugations510a,510b,510c,510d,510e, surface features of any type, number, or arrangement may be formed on one or more portions of the exterior surface of the mandrel505, to thereby form corresponding surface features on interior surfaces of components and/or negative space spars of components. The various surface features may be formed on one or more portions of the exterior surface of the mandrel505such that upon expansion of the thermally expanding mandrel505, e.g., due to the application of heat, corresponding surface features of any desired size, shape, combination, and/or arrangement may be formed on interior surfaces of components and/or negative space spars of components. In example embodiments, the various surface features included on one or more portions of the exterior surface of the mandrel505that form corresponding surface features on interior surfaces of components and/or negative space spars of components may further increase surface area of contact between one or more skins formed by self-skinning foam material compositions and/or resin coatings and portions of the foam material composition, thereby increasing interlaminar shear strength between the one or more skins and portions of the foam material composition to further contribute to the increased structural strength of the component. 
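The additional bond area contributed by corrugations such as510can be estimated by comparing the developed (arc) length of a corrugated profile with its flat projection. The sinusoidal profile, amplitude, and wavelength below are assumed, hypothetical values (none are specified in this disclosure); the sketch simply shows how a modest corrugation depth adds contact area for interlaminar shear.

```python
# Surface-area gain of a sinusoidal corrugation relative to a flat surface,
# from the arc length of y = A*sin(2*pi*x/L) over one wavelength.
# Amplitude and wavelength are assumed, hypothetical values.
import math

def corrugation_area_gain(amplitude_mm, wavelength_mm, steps=10000):
    dx = wavelength_mm / steps
    arc_length = 0.0
    for i in range(steps):
        x0, x1 = i * dx, (i + 1) * dx
        y0 = amplitude_mm * math.sin(2 * math.pi * x0 / wavelength_mm)
        y1 = amplitude_mm * math.sin(2 * math.pi * x1 / wavelength_mm)
        arc_length += math.hypot(dx, y1 - y0)
    return arc_length / wavelength_mm  # developed length / flat length

if __name__ == "__main__":
    gain = corrugation_area_gain(amplitude_mm=1.0, wavelength_mm=6.0)
    print(f"developed/flat contact area ratio: {gain:.2f} "
          f"(~{gain - 1:.0%} more bond area for interlaminar shear)")
```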
Moreover, with the inclusion of surface features on one or more skins of the component that increase structural strength, wall thicknesses between two or more skins of the component may be further reduced, thereby further reducing material usage and weight of the component while increasing structural strength. In still further example embodiments, as described herein with respect toFIGS.1C-1E, components that may be at least partially formed using a thermally expanding mandrel505may also include one or more support materials included at least partially within or attached or adhered to the foam material composition, the resin coatings, and/or the component materials. For example, the support materials may include a beam, rod, spar, or other structural support. In example embodiments, the support materials may be inserted into a molding tool and be surrounded by and molded into the foam material composition and/or the component materials. In other example embodiments, the support materials may be inserted into, attached to, or adhered to a cast or molded component after the foam material composition, the resin coatings, and/or the component materials have expanded and cured to form the component. The support materials may be formed of various types of materials, such as metals, plastics, woods, ceramics, polymers, or any other materials, or combinations thereof. In addition, the support materials may have any desired shape. In example embodiments, the component may include any type, shape, number, or arrangement of support materials. In example embodiments, the various support materials included at least partially within or attached or adhered to the foam material composition, the resin coatings, and/or the component materials may further contribute to the increased structural strength of the component. Moreover, with the inclusion of support materials as part of the component that increase structural strength, wall thicknesses of one or more portions of the component may be further reduced, thereby further reducing material usage and weight of the component while increasing structural strength. While the description with respect toFIGS.5A and5Bdescribes various aspects of the thermally expanding mandrel505individually, the various features described herein may be combined in various combinations. For example, a first portion of the exterior surface of the mandrel505may include corrugations510, and a second portion of the exterior surface of the mandrel505may include ribs508. In addition, a first portion of the exterior surface of the mandrel505may include ribs508, corrugations510, and/or other surface features, and a second portion of the exterior surface of the mandrel505may not include any surface features. Various other combinations of the various features described herein may also be included in the thermally expanding mandrel505. FIG.5Cis a schematic, cross-sectional view diagram of a thermally expanding mandrel505within a molding tool530at a first temperature, according to an implementation. As shown inFIG.5C, a thermally expanding mandrel505may be placed within a molding tool530a,530b. In example embodiments, the thermally expanding mandrel505may include component materials520applied or laid onto the exterior surface of the mandrel505. As described herein, one or more portions of the exterior surface of the mandrel505may include various surface features. 
Further, one or more portions of the molding tool530a,530bmay include various surface features, such as surface features that correspond to those included on the exterior surface of the mandrel505or other surface features, as desired. The thermally expanding mandrel505may be formed from a thermally expanding material composition, e.g., micronized rubber particles and gypsum plaster. As shown inFIG.5C, heat may not yet have been applied to the mandrel505and/or the molding tool530a,530b. For example, the mandrel505and/or the molding tool530a,530bmay be at a first temperature, e.g., room temperature or some other ambient temperature, at which the mandrel505has not expanded or has only minimally expanded. FIG.5Dis a schematic, cross-sectional view diagram of a thermally expanding mandrel505within a molding tool530at a second temperature, according to an implementation. As shown inFIG.5D, heat may have been applied to the mandrel505and/or the molding tool530a,530b. For example, the mandrel505and/or the molding tool530a,530bmay be at a second temperature, e.g., 150 degrees Fahrenheit, 250 degrees Fahrenheit, 300 degrees Fahrenheit, or any other elevated temperature, at which the mandrel505has expanded. Due at least partially to the difference in CTE between the thermally expanding material composition of the mandrel505and the materials of the molding tool530a,530b, the mandrel505may expand at a faster rate than the molding tool530a,530b. The expansion of the mandrel505may cause expansion of the component materials520applied or laid onto the mandrel505. In addition, the expansion of the mandrel505may cause application of pressure to the component materials520between the exterior surface of the mandrel505and interior surfaces of the molding tool530a,530b. Further, the application of heat and pressure to the component materials520, e.g., by the mandrel505and/or the molding tool530a,530b, may cause curing of the component materials520into a component having a desired shape. Further, various surface features included on the exterior surface of the mandrel505and/or on the interior surfaces of the molding tool530a,530bmay cause the formation of corresponding surface features on interior and/or exterior surfaces, respectively, of the component materials520. As described further herein, after completion of the formation of the component using the thermally expanding mandrel505, the temperatures of the mandrel505, molding tool530a,530b, and/or the component may be reduced to the first temperature, or some other handling temperature. Then, the component may be removed from the molding tool530a,530b, and the thermally expanding material composition of the mandrel505may be washed out of the component, e.g., using hot, pressurized water. In example embodiments in which the mandrel505is formed of micronized rubber particles and gypsum plaster, the expansion of the mandrel505due to application of heat may initiate at least partial breakage or fracturing of the micronized rubber particles from each other. In addition, the cooling of the mandrel505back to the first temperature, or some other handling temperature, may further cause breakage or fracturing of the micronized rubber particles from each other, at least partially due to their reduction in size as a result of cooling. Further, the application of hot, pressurized water may dissolve the gypsum plaster and cause additional, e.g., complete or nearly complete, breakage or fracturing of the micronized rubber particles from each other. 
In this manner, the micronized rubber particles and gypsum plaster may be washed out of the component using water. Moreover, the micronized rubber particles may be recycled, e.g., using a centrifuge or other filtering processes, and reused to form other thermally expanding mandrels. Furthermore, the gypsum plaster may also be recycled, e.g., using a plaster trap or other filtering processes, and also reused to form other thermally expanding mandrels. FIG.6is a flow diagram illustrating an example component formation process using a thermally expanding mandrel600, according to an implementation. The process600may begin by preparing a molding tool, as at602. For example, one or more release agents may be applied to the molding tool such that a cast or molded component may be removed from the molding tool upon completion of the process600. Further, the molding tool may be designed with various draft angles to facilitate removal of a cast or molded component from the molding tool. Moreover, the molding tool may include various surface features as described herein, in order to create corresponding surface features on exterior surfaces of the component, such as corrugations, ribs, striations, protrusions, bumps, indentations, dimples, or other surface features, or combinations thereof. The molding tool may be a single-part, two-part, or multi-part molding tool, and the molding tool may be formed from various materials, such as aluminum, carbon, steel, Inconel, other metals, ceramics, polymers, composites, or combinations thereof. The process600may continue by preparing a material composition for a thermally expanding mandrel, as at604. The material composition may be a thermally expanding material composition including thermally expanding particles and binder material. As described herein, the thermally expanding particles may include micronized rubber particles or powder, silicone rubber microspheres, particles, or powder, or other thermally expanding particles having a CTE that is higher than a CTE of materials of the molding tool. In addition, the binder material may be a water-soluble binder material, such as gypsum plaster. The relative proportions of thermally expanding particles and binder material may be determined based at least in part on desired CTE of the mandrel, processes to be used to form the mandrel, physical properties of the mandrel at various temperatures, physical properties of the molding tool at various temperatures, geometry of the component to be formed using the mandrel, and/or other factors related to the mandrel, mandrel formation, and/or component formation processes. The thermally expanding material composition may expand upon application of heat, contract upon removal of heat, break apart or fracture at least partially during or after use, and/or be removable or dissolvable using hot, pressurized water. The process600may then proceed by forming the mandrel using the material composition, as at606. The mandrel may be formed in any desired shape using various processes and methods. For example, the mandrel may be formed by molding or casting the material composition in a molding tool, 3-D printing, machining, other processes, or combinations thereof. In further example embodiments, one or more release agents may be applied to the mandrel such that the material composition may be removed from the component upon completion of the process600. 
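One conventional first-pass screen for the proportioning decision at 604 is a rule-of-mixtures estimate of the mandrel's CTE from the particle and binder fractions. The disclosure does not state that this calculation is used, and the constituent CTE values below are assumptions, so the sketch is only an illustration of how candidate proportions could be compared against the tool's CTE.

```python
# Rule-of-mixtures (volume-fraction weighted) estimate of mandrel CTE from
# thermally expanding particles and binder.  Constituent CTEs are assumed values.

def mixture_cte(alpha_particles, alpha_binder, particle_volume_fraction):
    vf = particle_volume_fraction
    return vf * alpha_particles + (1.0 - vf) * alpha_binder

if __name__ == "__main__":
    ALPHA_RUBBER = 150e-6   # per deg C, assumed for micronized rubber particles
    ALPHA_PLASTER = 15e-6   # per deg C, assumed for gypsum plaster binder
    ALPHA_TOOL = 23e-6      # per deg C, assumed for an aluminum molding tool
    for vf in (0.4, 0.6, 0.8):
        alpha = mixture_cte(ALPHA_RUBBER, ALPHA_PLASTER, vf)
        relation = "above" if alpha > ALPHA_TOOL else "below"
        print(f"particle fraction {vf:.0%}: mandrel CTE ~{alpha * 1e6:.0f}e-6/C "
              f"({relation} the tool CTE)")
```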
Further, the mandrel may be designed with various draft angles to facilitate removal of one or more portions of the mandrel from the component. Moreover, the mandrel may include various surface features as described herein, in order to create corresponding surface features on interior surfaces of the component, such as corrugations, ribs, striations, protrusions, bumps, indentations, dimples, or other surface features, or combinations thereof. The process600may then continue to apply component material onto the mandrel and/or the molding tool, as at608. As described herein, the component material may include carbon fiber strips, tape, sheets, or other layers, other types of materials, or combinations thereof, that may be applied to, laid onto, or wrapped around the mandrel and expanded and cured upon application of heat and pressure within the molding tool. In example embodiments, the component material may be applied in one or more layers having various thicknesses, as desired, that may affect the resultant weight and/or strength of the component. The process600may continue by inserting the mandrel into the molding tool, as at610. For example, the thermally expanding mandrel may be placed in position within or relative to the molding tool, in order to form the component and/or a negative space spar within the component at a particular position and/or with a particular arrangement. The process600may then proceed by closing the molding tool, as at612. The closed molding tool may substantially seal the component material within the molding tool and/or around the mandrel. Then, the process600may continue by applying heat to the closed molding tool and/or the mandrel to thermally expand the mandrel and expand, compress, and cure the component material, as at614. For example, heat may be applied by placing the molding tool and/or the mandrel in a curing oven. Alternatively, heat may be applied to the molding tool and/or the mandrel by other methods, such as by direct application of heat to one or more portions of the molding tool and/or the mandrel. Upon application of heat, the mandrel formed of a thermally expanding material composition having a higher CTE than a CTE of the materials of the molding tool may expand at a faster rate than the molding tool. The expansion of the mandrel may correspondingly cause expansion of the component material applied or laid onto the mandrel. In addition, the mandrel may apply pressure to the component material against interior surfaces of the molding tool, thereby compressing and curing, e.g., by application of heat and pressure, the component material into the component. In example embodiments, various combinations of curing temperatures, curing pressures, and/or curing times may be determined and used based at least in part on properties of the component material. After completion of the expansion, compression, and curing of the component material, the process600may continue by stopping the application of heat to the closed molding tool and/or the mandrel, as at616, and by opening the molding tool, as at618. Then, the process600may proceed by removing the component from the molding tool, as at620, and by washing out the mandrel, or the thermally expanding material composition of the mandrel, from the component, as at622. For example, the thermally expanding material composition may be water-soluble such that the expanding material composition may be washed out of the component using hot or warm, pressurized water. 
As described herein, in example embodiments, the expansion of the mandrel due to application of heat may initiate at least partial breakage or fracturing of the thermally expanding particles from each other. In addition, the cooling of the mandrel may further cause breakage or fracturing of the thermally expanding particles from each other, at least partially due to their reduction in size as a result of cooling. Further, the application of hot, pressurized water may dissolve the binder material and cause additional, e.g., complete or nearly complete, breakage or fracturing of the thermally expanding particles from each other. In this manner, the thermally expanding particles and water-soluble binder material may be washed out of the component using water. Then, the process600may continue by recycling the material composition for the mandrel, as at624. For example, the thermally expanding particles may be recycled, e.g., using a centrifuge or other filtering processes, and reused to form other thermally expanding mandrels. Furthermore, the binder material may also be recycled, e.g., using a plaster trap or other filtering processes, and also reused to form other thermally expanding mandrels. The process600may then end, as at626. In example embodiments in which the component formed by the process600also includes one or more foam material compositions, one or more resin coatings, and/or one or more support materials, as described herein, the process600may also include one or more of the steps described with respect to processes200,400, such as preparation of the foam material composition, injection of the foam material composition, preparation of the resin coating material, application of the resin coating material, insertion of one or more support materials, expansion and curing of the foam material composition, and/or curing of the resin coating material, as described herein. The component formed using a thermally expanding mandrel by the process600described herein may include any desired shape, form, or geometry. Because the expanding material composition of the thermally expanding mandrel may be broken down, dissolved, and washed out of the component, a conventional mandrel having a fixed shape need not be removed from the interior of the component after completion of the process. Furthermore, the component may be formed with various surface features on interior surfaces and/or exterior surfaces based at least in part on corresponding surface features included on the thermally expanding mandrel and/or the molding tool. Moreover, while the example embodiments have been described herein in the context of a single, thermally expanding mandrel used to form a component, multiple thermally expanding mandrels of any desired shapes, forms, or geometries may be used together to form components with complex shapes, forms, and geometries including one or more negative space spars in any combination or arrangement. Further, any of the various features described herein with respect to any of the figures and example embodiments may be combined in various combinations. All or portions of the process600described herein may be performed by automated or semi-automated machinery that is controlled and/or programmed to perform one or more steps of the process600. For example, automated or semi-automated machinery or robotics may prepare the molding tool. 
Further, automated or semi-automated machinery or robotics may prepare the material composition for the thermally expanding mandrel, may form the mandrel using the material composition, and/or may apply component material to the mandrel and/or the molding tool. In addition, automated or semi-automated machinery or robotics may insert the thermally expanding mandrel into the molding tool, and/or may close the molding tool. Further, automated or semi-automated machinery or robotics may apply heat to the closed molding tool and/or the thermally expanding mandrel to expand the mandrel, and compress and cure the component material, and/or may stop applying heat to the closed molding tool and/or the thermally expanding mandrel. Moreover, automated or semi-automated machinery or robotics may open the molding tool upon completion, and/or may remove the component from the molding tool. Furthermore, automated or semi-automated machinery or robotics may wash out the thermally expanding mandrel from the component, and/or may recycle the thermally expanding material composition for the mandrel. Each process described herein may be implemented by various architectures described herein or by other architectures. The processes are illustrated as a collection of blocks in a logical flow. Some of the blocks represent operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the blocks represent computer-executable instructions stored on one or more computer readable media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. The computer readable media may include non-transitory computer readable storage media, which may include hard drives, floppy diskettes, optical disks, CD-ROMs, DVDs, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, flash memory, magnetic or optical cards, solid-state memory devices, or other types of storage media suitable for storing electronic instructions. In addition, in some implementations, the computer readable media may include a transitory computer readable signal (in compressed or uncompressed form). Examples of computer readable signals, whether modulated using a carrier or not, include, but are not limited to, signals that a computer system hosting or running a computer program can be configured to access, including signals downloaded through the Internet or other networks. Finally, the order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the process. Additionally, one or more of the operations may be considered optional and/or not utilized with other operations. Those skilled in the art will appreciate that, in some implementations, the functionality provided by the processes and systems discussed above may be provided in alternative ways, such as being split among more software modules or routines or consolidated into fewer modules or routines. Similarly, in some implementations, illustrated processes and systems may provide more or less functionality than is described, such as when other illustrated processes instead lack or include such functionality respectively, or when the amount of functionality that is provided is altered. 
In addition, while various operations may be illustrated as being performed in a particular manner (e.g., in serial or in parallel) and/or in a particular order, those skilled in the art will appreciate that, in other implementations, the operations may be performed in other orders and in other manners. The various processes and systems as illustrated in the figures and described herein represent example implementations. The processes and systems may be implemented in software, hardware, or a combination thereof in other implementations. Similarly, the order of any process may be changed, and various elements may be added, reordered, combined, omitted, modified, etc., in other implementations. From the foregoing, it will be appreciated that, although specific implementations have been described herein for purposes of illustration, various modifications may be made without deviating from the spirit and scope of the appended claims and the features recited therein. In addition, while certain aspects are presented below in certain claim forms, the inventors contemplate the various aspects in any available claim form. For example, while only some aspects may currently be recited as being embodied in a computer readable storage medium, other aspects may likewise be so embodied. Various modifications and changes may be made as would be obvious to a person skilled in the art having the benefit of this disclosure. It is intended to embrace all such modifications and changes and, accordingly, the above description is to be regarded in an illustrative rather than a restrictive sense. | 87,445 |
11858180 | DETAILED DESCRIPTION FIG.1shows a schematic view of the steps of an embodiment of the method according to the invention. The reference1000indicates an example of the steps performed for preparing the piece to be consolidated, before placing the piece in a cavity of a container. The reference2000indicates an example of the steps performed after this preparing1000. The steps1000and2000are made up of operations and movements that are preferably executed sequentially, as indicated by the sense of the arrows. Those operations and movements can be executed in a manual, semi-automatic or fully-automatic manner. Semi or fully automatic operations or movements can be driven by direct feedback from, e.g. sensors, user input or self-learning algorithms, or by preconfigured settings. In the example ofFIG.1, first, the piece is manufactured in step110. The piece according to the invention is additively manufactured e.g. by automated filament winding, automated tape laying (ATL), automated fiber placement (AFP) or 3D printing. Elements may have been embedded into the piece during the additive manufacturing process (for example, the process may have been stopped mid-way to embed a foam core into the partially completed additively manufactured piece, after which the additive manufacturing process is resumed to enclose the foam). The piece to be consolidated with the method according to the invention can be made of homogeneous and/or composite material. An example of this piece is given inFIG.2. FIG.2shows a perspective view of an embodiment10′ of an additively manufactured piece10to be consolidated. In this example, the piece is a bracket10′ having a curved shape and two ends4, each end4comprising a hole3. The bracket10′ comprises an outer lateral surface (perimeter)1and a core2. In one embodiment, the lateral surface1and the core2are made of a homogeneous material, e.g. plastic. In another embodiment, the lateral surface1and the core2are made of a heterogeneous material, e.g. carbon fiber reinforced plastic. In another embodiment, the lateral surface1and the core2are made of different materials: for example, the lateral surface1can be made of hybrid plastic and the core2of carbon fiber reinforced plastic, or vice-versa. With the method according to the invention it is also possible to consolidate and at the same time assemble or connect together multiple pieces. At least one and possibly all of those pieces is (are) additively manufactured. In other words, the pieces can be combined before consolidating them. During their combination, connection means can be used (if possible) for maintaining the relative position between the pieces so as to handle the composed piece. Connection means comprise mechanical locks, adhesives, fasteners, knitting, etc. In one preferred embodiment, the method according to the invention not only allows the consolidation of at least one of those pieces, but also allows the connection of the combined pieces together so that they form a monobloc piece. Those pieces may or may not be chemically linked. An example of pieces that can be chemically linked is given inFIGS.3A and3B. FIGS.3A and3Bshow a perspective view of two embodiments of additively manufactured pieces10. They are the components of a two-component bracket, visible inFIG.3C. The first component10″ of the bracket, visible inFIG.3A, is substantially planar and comprises two holes3and an aperture5arranged for receiving the protruding part6of the second component10″′ of the bracket visible inFIG.3B. 
The second component10′″ of the bracket has a complex geometry, comprising a base7and a protruding part6having a complex lateral shape. It also comprises a hole3. The first component10″ can be manufactured with 3D printing, e.g. by stacking layers in the z direction, each layer belonging to the x-y plane. The second component10″′ can be manufactured with 3D printing e.g. by stacking layers in a plane perpendicular to the x-y plane, e.g. the x-z plane or the z-y plane. Therefore, for obtaining the bracket represented inFIG.3C, it is necessary to assemble the first component10″ ofFIG.3Awith the second component10″′ ofFIG.3B, as they are manufactured in different planes. In other words, manufacturing the bracket ofFIG.3Cas a monobloc additively manufactured piece without assembling different parts can be difficult or even impossible. With the method according to the invention it is also possible to assemble or connect together multiple pieces that cannot be chemically linked. At least one and possibly all of those pieces is (are) additively manufactured. The method according to the invention establishes a mechanical lock and/or provides adherence between those pieces. An example is given inFIGS.4A and4B. FIG.4Ashows a perspective view of four different embodiments of additively manufactured pieces to be assembled. Once assembled, they form a pressure vessel, visible inFIGS.4B and4C. A metal core10iv, e.g. a printed metal core or a core manufactured with other techniques, is encapsulated by two printed carbon fiber end cups10v, as illustrated by the arrows F1. A printed carbon fiber hollow cylinder (sleeve)10viis then inserted on the encapsulated metal core, as illustrated by the arrows F2. The obtained pressure vessel is illustrated inFIG.4B. In this example, the pieces10iv,10vand10vicannot be chemically linked. As will be discussed, the method according to the invention allows the metal core10ivto adhere to the two printed carbon fiber components10v, and the printed carbon fiber components10vand the printed carbon fiber hollow cylinder10vito cohere. If the additively manufactured pieces10must be assembled as in the examples ofFIGS.3B and4B, the method comprises the step120of combining at least one additively manufactured piece with another piece, possibly with another additively manufactured piece. It is also sometimes necessary or useful to insert or encapsulate one or more functional elements in an additively manufactured piece or in a group of combined pieces comprising at least one additively manufactured piece. Non-limitative examples of such functional elements comprise: elements for enhancing a function, e.g. fasteners, sensors, actuators, cables, etc.; elements for enabling load introduction, e.g. inserts; and/or elements for increasing mechanical properties, e.g. metal, ceramic, glass, fiber reinforced plastic rods and/or other structures. These elements may be partially inserted and/or may be encapsulated only when multiple pieces are assembled. In one preferred embodiment, those elements have a melt point higher than the process temperature. It is also sometimes necessary or useful to embed one or more structural cores in the additively manufactured piece, e.g. a lattice, foam and/or honeycomb core. In one preferred embodiment, the core has a melt point higher than the process temperature. The step of embedding a core is illustrated by the reference130inFIG.1. The step of adding elements or inserts is illustrated by the reference140inFIG.1.
Preparing the piece10can also include creating air channels (not illustrated) in the piece10so as to accommodate air leaving the piece10during its consolidation. Preparing the piece10can include a step during which the outer surface (shell) of the piece10is made as crack free as possible, so as to prevent the curable material from entering the cracks and then breaking the piece10during the curing step. In one preferred embodiment, this step comprises manufacturing the piece10so that at least a portion of its outer surface is made of a crack free material only, e.g. of plastic. In another embodiment, this manufacturing step is carried out as carefully as possible, so that the bonding between the layers of the piece10is as crack free as possible. In one embodiment, preparing the piece10can include a step during which the piece10is put into an air tight bag, which covers the outer shell of the piece10. This air tight bag may be connected to the ambient environment outside the cavity through a channel (e.g. a tube), so as to accommodate air that escapes from the piece10into the bag during the consolidation step. The air tight bag can in this embodiment also be held under vacuum by generating an under pressure through the channel, which in this case would be connected to a vacuum pump. In another embodiment, at least part of the piece10is covered by and/or wrapped into an airtight seal/sheet. The presence of an air tight bag, seal and/or sheet makes it possible to prevent the curable material30from entering possible cracks in the piece10and then breaking the piece10during the curing step. If combined pieces must be consolidated, the junction point(s) or area(s) of two or more pieces are made seal-tight after their combination, so as to prevent curable material from entering in between and separating the pieces. In one embodiment, the pieces before the curing step are at least partially covered by a crack free material (e.g. plastic), e.g. with a sheet of crack free material or by spraying a crack free material on at least a portion of the outer surface of the pieces. Once the piece (or the combined pieces)10has been prepared, it is placed in a cavity of a container, e.g. in the cavity21of the container20illustrated inFIG.5A. In this example, the container20has a lateral surface26and a bottom surface28, defining the cavity21. A lid22, visible inFIG.5B, can be connected to the lateral surface26so as to close the cavity21and then the container20by fixation means (in this example, screws40cooperating with holes14in the container20). The illustrated container20is substantially cylindrical, but any other shape or size of the container can be imagined. It can also have a variable volume. The container20is (at least partially) made of a thermally conductive material. In one preferred embodiment, it is metallic. The container20is arranged so as to support high pressures, i.e. pressures belonging to the range 1 Bars-10 Bars or higher, typically in the range 3 Bars-7 Bars. In one embodiment, part of the container20and/or its lid22can be perforated so as to accommodate air exchange with the surrounding environment during the consolidation step. In one preferred embodiment, those (not illustrated) perforations should not inhibit pressure build-up in the container20. In one preferred embodiment, the method according to the invention comprises the step of positioning the piece(s)10in the cavity21by using (not illustrated) positioning means. Examples of positioning means comprise spacers, holders, pins, etc.
In one preferred embodiment, illustrated inFIGS.7A and7B, the bottom portion of the container20comprises a build platform28for the piece10, on which the piece has been additively manufactured. In another embodiment, the lid and/or the container lateral frame comprise this build platform. This has the advantage that, after the piece10is additively manufactured on the build platform28, the build platform28supporting the piece10can be directly used for both placing the piece in the cavity21and for at least partially closing the opening of the cavity21through which the piece10has been entered in the cavity21. In another embodiment the build platform supporting the piece is used only for placing the piece, another lid allowing the opening of the container to be closed. In one embodiment, the piece10is automatically stuck on the build platform28during the manufacturing step110. In another embodiment, connection means (as adhesive, glue, etc.) are used for connecting the build platform28to the piece10. The cavity21is at least partially filled with a liquid or semi-liquid material30, before or after positioning the piece10in the cavity21, so that the material30directly contacts at least an outer portion of the piece10, by perfectly surrounding or enveloping this portion. In one preferred embodiment, it completely surrounds the piece10. In one embodiment, air channels (not illustrated) are added to or created in the curable material30, before or after the curing step, so as to allow air leaving the piece10during the consolidation step to leave the cavity21, or at least so as not to trap air leaving the piece10between the monobloc mould and the piece10. This improves the compaction and fusion of the piece10. According to the invention, this material30is curable, has a maximum operating temperature higher than the lower melt temperature of the piece to be consolidated, and has a positive relative thermal coefficient of expansion. In one preferred embodiment, it does not degrade, soften, deteriorate, melt and/or burn with high pressures, i.e. pressures belonging to the range 1 Bars-10 Bars or higher. Examples of this material30comprise rubber (e.g. natural or synthetic), silicone, elastomer, thermoplastic, thermoset and/or starch based elastomers or plastics (biodegradable). If the cavity21is not completely filled by the material30, a filler (not illustrated) can be used for filling the remaining volume or vice versa. In one preferred embodiment, the filler comprises a previously cured material30that has been pre-processed before re-using it as a filler, e.g. so as to reduce it to granulate. In another embodiment, sand, plastic and/or metal particles can be used as a complementary or alternative filler and may be added to the material30before filling the cavity21. The placing step and the filling step form the potting step210ofFIG.1. The cavity21is then sealed, e.g. by closing the open part of the container20with a lid22, visible inFIG.5B(step220). In another embodiment, the sealing step is performed after the curing step. Mechanical, electrical and/or magnetic forces can be used so as to restrict expansion of the cured material30to build up pressure in the cavity21. The material30is then cured, so that the cured material30restricts the movement of the piece10in the cavity21. Depending on the substance's curing properties, its curing can be performed by, e.g. time, exposure to gas(es), exposure to UV and/or to heat. Once the curable material is cured, it becomes solid.
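The requirements stated above for the curable potting material30 (maximum operating temperature higher than the lower melt temperature of the piece, positive thermal expansion coefficient, and tolerance of the pressures the container is designed for) can be written as a simple selection check. The following Python sketch is illustrative only; the class, field names and example values are assumptions and not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class PottingMaterial:
    """Illustrative description of a candidate curable material (e.g. silicone, rubber)."""
    name: str
    max_operating_temp_c: float      # highest temperature the cured material withstands
    thermal_expansion_coeff: float   # relative coefficient of thermal expansion (must be positive)
    max_pressure_bar: float          # pressure the material tolerates without degrading

def material_is_suitable(material: PottingMaterial,
                         piece_lower_melt_temp_c: float,
                         container_pressure_bar: float = 7.0) -> bool:
    """Check the constraints stated in the description for the in-situ mould material."""
    return (material.max_operating_temp_c > piece_lower_melt_temp_c
            and material.thermal_expansion_coeff > 0.0
            and material.max_pressure_bar >= container_pressure_bar)

# Example: a hypothetical silicone rated to 300 degC against a piece whose matrix melts at 200 degC.
silicone = PottingMaterial("silicone", max_operating_temp_c=300.0,
                           thermal_expansion_coeff=3e-4, max_pressure_bar=10.0)
print(material_is_suitable(silicone, piece_lower_melt_temp_c=200.0))  # True
```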
The container20(and then its cavity21) is then heated to a temperature equal or higher than the lower melt temperature of the piece to be consolidated, but lower than the maximum operating temperature of the material. In one embodiment, the heating step is 30 seconds to 5 minutes long. In one preferred embodiment, the container20comprises or is connected to means (not illustrated) for controlling the time and the temperature and possibly the pressure in the cavity21during the heating step. The heating step is illustrated inFIG.1by the reference240.FIG.8shows an example of the variation in time of the temperature of the container20(and then of the cavity21). During the heating step240, the heat causes expansion, reduction, relative movement and/or deformation of the cured material30and of the piece10, depending on relative heat expansion coefficient and other physical properties. Since the material30has a positive thermal coefficient of expansion, once the cured material is heated, it expands in the cavity21so as to generate a substantially homogenous pressure on the piece10. During this expansion, the cured material remains solid and maintains close control of the geometry of the piece10. In one preferred embodiment, the container20comprises or is connected to means for controlling the pressure in the container during the heating step. In one preferred embodiment, the container comprises or is connected to a piston so as to generate a controlled force or a controlled pressure on the piece during the heating step. In one embodiment, illustrated inFIG.6or inFIGS.7A and7B, the piston24is or is contained in a lid of the container, closing its opening. AlthoughFIGS.7A and7Billustrate the presence of means for controlling the pressure (as the piston24) with a build plate28, it must be understood that the presence of both features is not necessary. In one (not illustrated) embodiment, only the build plate28is present, the container being devoid of means for controlling the pressure of the cavity21. In the example ofFIG.8, the temperature T1is the melt temperature of the matrix of a composite additively manufactured piece10. In one example, T1belongs to the range 150° C.-400° C., for example T1=200° C. Once T1is reached, in general after about 15 minutes, thermoplastics melt and the piece10starts to consolidate (its air is pressed out, the piece is compacted) and fuse (more linkage between polymers). If there are combined composite additive manufacture pieces10in the cavity21, the pieces10start also to join together. According to the invention, the temperature is increased until reaching a process temperature equal or higher than the lower melt temperature of the piece to be consolidated, but lower than the maximum operating temperature of the material. To create more pressure on the piece(s)10, the temperature can be increased so to a process temperature T2, to expand cured material more. More pressure could mean also better consolidation. In one example, T2belongs to the range 200° C.-400° C., for example T2=250° C. In the example ofFIG.8, T2>T1. However, T2can be equal to T1. The process temperature T2is then maintained for a certain time interval (t3−t2), typically ranging from 30 minutes to 6 hours, so as to consolidate the piece10. The consolidation step is illustrated inFIG.1by the reference250. Consolidation takes place also due to the homogenous pressure that is applied to piece(s)10. 
This consolidation step250causes fusion (bonding), compaction and/or crystallization of or within piece(s)10and can result in air leaving the piece(s)10. For example, the method according to the invention allows the printed metal core10ivofFIGS.4A to4Cto adhere to the two printed carbon fiber components10v, and the printed carbon fiber components10vand the printed carbon fiber hollow cylinder10vito cohere. According to the invention, during the heating step and during the maintaining step, the cured material30is an in-situ created monobloc mould for the piece10. It is the unique or sole mould in the cavity21and it is created in the cavity21, after the piece has been placed in the cavity21. In one embodiment, the material30filling the cavity can be a recycled material, i.e. a material that has been cured and that has been processed so as to come back to a liquid or semiliquid and still curable state. In the embodiment ofFIG.1, the consolidation step250is followed by a cooling step260. In one preferred embodiment, as visible inFIG.8, the step of cooling260comprises: cooling the container20(and then the cavity21) so as to reach a predetermined temperature T3, the time interval (t4−t3) typically ranging from 10 minutes to 2 hours; maintaining this predetermined temperature for a time interval (t5−t4), which typically ranges from 10 minutes to 1 hour; and cooling the container20(and then the cavity21) so as to reach the room temperature T0, in general within 1 hour. In one preferred embodiment, the piece comprises a matrix, e.g. a thermoplastic matrix, and this predetermined temperature T3is the glass transition temperature of the thermoplastic matrix. The difference in the coefficients of thermal expansion between the fibers and the (e.g. thermoplastic) matrix as well as the density change for semi-crystalline polymers may lead to residual stresses and warping. To prevent this and to relieve such residual stresses, the glass transition temperature T3is maintained for a time interval (t5−t4). This also applies to a piece made of a homogeneous material. In one example, T3belongs to the range 50° C.-350° C., for example T3=100° C. In one embodiment, the cooling rate during the time interval (t4−t3) is lower than the cooling rate after t5. Typical cooling rates range from 5° C./min to 50° C./min. In one preferred embodiment, the method according to the invention, after the cooling step, comprises the step of separating the consolidated piece(s) from the cured material. This step is illustrated inFIG.1by the reference270. In one preferred embodiment, the separating step comprises programming the path planning of a tool (e.g. a knife, a drill, a mill etc.) based on a 3D model file that was entered by the user and/or by probing and/or by scanning the piece prior to the curing step. Once the in-situ created mould is separated from the piece10, it can optionally be reused as a traditional mould or shredded to be used as granular filler. Optionally, post-processing may take place to clean, surface smoothen, paint and/or coat the consolidated piece(s)10(step280inFIG.1).
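The heating, consolidation and cooling sequence described above (steps240to260) can be summarised as a time-temperature schedule. The Python sketch below encodes one plausible cycle using the example temperatures quoted in the text (T1=200° C., T2=250° C., T3=100° C.); the segment durations are assumptions chosen inside the stated ranges, and the structure and names are illustrative only, not the patented control method.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    label: str
    target_temp_c: float
    duration_min: float

# One plausible cycle built from the example values in the description:
# heat to the matrix melt temperature T1, raise to the process temperature T2,
# hold to consolidate, cool to the glass transition temperature T3, hold to
# relieve residual stresses, then cool to room temperature T0.
cycle = [
    Segment("heat to T1 (matrix melt)",       200.0, 15.0),
    Segment("raise to T2 (process)",          250.0, 10.0),
    Segment("hold at T2 (consolidation)",     250.0, 120.0),
    Segment("cool to T3 (glass transition)",  100.0, 60.0),
    Segment("hold at T3 (stress relief)",     100.0, 30.0),
    Segment("cool to T0 (room temperature)",   20.0, 45.0),
]

def check_cooling_rates(cycle, max_rate_c_per_min=50.0):
    """Verify that no cooling segment is faster than the 50 degC/min upper bound quoted in the text."""
    prev = cycle[0].target_temp_c
    for seg in cycle[1:]:
        rate = (prev - seg.target_temp_c) / seg.duration_min
        if rate > max_rate_c_per_min:
            raise ValueError(f"{seg.label}: cooling rate {rate:.1f} degC/min too high")
        prev = seg.target_temp_c

check_cooling_rates(cycle)
print(f"total cycle time: {sum(s.duration_min for s in cycle):.0f} min")
```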
REFERENCE NUMBERS USED IN THE DRAWINGS

1 Outer lateral surface of the bracket
2 Core of the bracket
3 Hole
4 End of the bracket
5 Aperture of the first component of a bracket
6 Protruding part of the second component of a bracket
7 Base of the second component of a bracket
10 Additively manufactured piece
10′ to 10vi Examples of additively manufactured pieces
10′ Bracket
10″ First component of a bracket
10″′ Second component of a bracket
10iv Printed metal core of a pressure vessel
10v Printed carbon fiber components
10vi Printed carbon fiber hollow cylinder
14 Holes of the container
20 Container
21 Cavity of the container
22 Lid
24 Means for generating a controlled pressure in the container (piston)
26 Lateral surface of the container
28 Build plate
30 Curable material
40 Fixation means (screw)
110 Piece additive manufacturing step
120 Combining pieces step
130 Embedding core step
140 Adding elements step
250 Consolidating step
260 Cooling step
270 Separating step
280 Post-processing step
1000 Steps for preparing the piece to be entered in the cavity
2000 Steps after preparing the piece to be entered in the cavity
F1, F2 Arrow
T0 Room temperature
T1 Matrix (thermoplastics) melt temperature
T2 Process (consolidation) temperature
T3 Matrix (thermoplastics) glass transition temperature
t1 to t6 Time

1 Reinforcement fibers
2 Matrix
3 Hole
4 Aperture
10, 10′ to 10vi Additively manufactured piece
14 Holes of the container
20 Lateral surface of the container
22 Top surface of the container (lid)
24 Means for generating a controlled pressure in the container (piston)
28 Bottom surface of the container
30 Curable material
40 Fixation means (screw)
110 Potting piece step
120 Sealing step
130 Curing step
140 Heating step
150 Consolidating step
160 Cooling step
170 Separating step
180 Post-processing step
210 Piece additive manufacturing step
220 Combining pieces step
230 Embedding core step
240 Adding elements step
T0 Room temperature
T1 Matrix (thermoplastics) melt temperature
T2 Process (consolidation) temperature
T3 Matrix (thermoplastics) glass transition temperature
t1 to t6 Time | 22,691
11858181 | DETAILED DESCRIPTION In one aspect, the invention provides an ultrasound-detectable medical device comprising a polymer with microcavities dispersed in some or all of its body capable of providing improved visibility throughout some or all of its volume and under variable angles of insonationFIGS.1and2B. In some instances, the microcavities extend throughout the entire volume of the medical device. In other instances, the microcavities occupy a central region of the medical device. In additional instances, the space containing microcavities is surrounded by an outer layer of material without microcavities. In another aspect, the invention provides an ultrasound-detectable device wherein the diameter (microcavity size) ranges between 0.1 to 950 microns, and commonly between 50 to 350 microns. In some instances, the microcavity diameter exceeds 1,000 microns. In other instances, the microcavity diameter ranges from 10 to 500 microns. In additional instances, the microcavities exhibit diameters from 10 to 1,500 microns. In a further aspect, the invention provides an ultrasound-detectable device wherein the ideal volume to volume ratio of cavity space to polymer structures should be less than 60%, and is commonly between 12% and 50%. In some instances, the microcavities comprise between 30 to 50% of the volume. In other instances, the volume ratio of microcavities exceeds 60%. The ultrasound-detectable device contains microcavities. In one aspect of the device, the microcavities are composed of gas. In one aspect of the invention, the device is created via injection molding. In another embodiment, the device is manufactured by extrusion. In some aspects of the invention, microcavities are created by introducing gas into the polymer material prior to manufacturing, commonly through injection. In other aspects of the invention, microcavities are introduced during the manufacturing process, which can be performed by injecting gas into a mold either before, while, or after the polymer enters the mold. The microcavities may be composed of a variety of biocompatible gases. In some instances, super-critical CO2 is used, and in other instances, N2 is used. In another embodiment, the microcavities are created via a chemical reaction such that gas is released into the polymer. This may be accomplished with a foaming agent or other chemical processes. The gas may be activated by pressure or temperature changes in the manufacturing process. For a variety of reasons, including mechanical, material degradation, visibility, and manufacturing considerations, it is desirable to have the microcavities consume a region within the overall volume, rather than the entire device. In one embodiment, the region containing the microcavities is central to the device. In this embodiment, the region containing the microcavities is surrounded by a layer of polymeric or non-polymeric material that does not contain microcavities. In other embodiments of the device, this external layer, or “skin”, contains microcavities, though of a reduced density. In further embodiments, the region containing the microcavities resides on the top surface of the device (superficial towards the position of the ultrasound probe), while in other embodiments, the microcavity region resides on the bottom surface of the device. In one aspect of the invention, there is an outer layer of the device which is meant to maintain the structural integrity of the inner microcavity-containing region. 
This outer layer does not contain microcavities and thus provides a barrier protecting the inner region, especially from fluid flow, which could accelerate degradation and also negatively impact the ultrasonic visibility. In another aspect of the invention, the outer layer described has a smooth surface to minimize irritation and other adverse events to surrounding tissue or vessels once the device is implanted. Another aspect of the device relates to the visibility of the device under ultrasonic imaging. In this aspect, the device is used as an echogenic marker for ultrasound location in the human body. Some anatomic structures that can be marked using this device include: veins, arteries, soft tissue, urinary tracts, nerves, and ducts. The device enables location of any of these structures after implantation. In particular, the device gives the clinician knowledge of the spatial relationship between the ultrasound probe and anatomic structure, independent of the angle of insonation. The device enables locating the anatomic location repeatedly across many examinations after placement of the device. The size of the device ranges from 1 to 60 mm in length, 1 to 60 mm in width, and 1 to 40 mm in height. Some embodiments of the device represent curved, cradle-like structures. Other embodiments of the device are spheres, rectangles, cubes, plates, pellets, and discs. Some instances of when this device could be used are for: microvascular anastomoses, solid organ transplants, vascular bypass, and vascular access. In one embodiment of the device, it is comprised of one or more resorbable polymers selected from the group of: poly(lactic-co-glycolic acid) (PLGA), polylactide (PLA), polyglycolide (PGA), polyhydroxyalkanoate (PHA), polycaprolactone (PCL), polyethylene glycol (PEG) and copolymers thereof. In another embodiment of the device, it is comprised of one or more non-resorbable polymers selected from the group of: polycarbonate, polyetheretherketone, polypropylene, silicone, polyethylene, polyester, polybutylene terephthalate (PBT), polyvinyl chloride, polyethylsulfone, polyacryclate, polyetheretherketone, poly-p-xylylene (parylene), polytetrafluoroethylene, cyclo olefin, acrylonitrile butadiene styrene, polyeurethane, acrylonitrile styrene acrylate, acetals, polyetherimide, ethylene, chlorotrifluoroethylene, ethylene tetrafluoroethylene, polyvinyl fluoride, polyvinylidene difluoride, and polyhydroxybutyrate. In a further embodiment, the device is comprised of both resorbable and non-resorbable materials, which may be in the form of multiple sections with unique materials, a single blend of materials, or multiple sections of blended materials. In one aspect of the invention, the device is manufactured via a foaming process. Microcavities are introduced into the polymer by introducing a blowing agent. The blowing agent created the cellular structure of the microcavities. In one embodiment of the invention, the blowing agent is a physical blowing agent. In another embodiment, the blowing agent is a chemical blowing agent. An alternative way of generating the foam is using a solvent such as acetone. In addition to introducing the foaming agent, this invention describes injecting the polymer into a mold. An alternative way of producing the device is via extrusion. 
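The microcavity geometry described earlier (diameters commonly between 50 and 350 microns within a broader 0.1 to 950 micron range, and a cavity volume fraction ideally below 60% and commonly between 12% and 50%) can be expressed as a simple specification check. The following Python sketch only restates those ranges; the function name, arguments and chosen limits are illustrative assumptions, not part of the patent.

```python
def microcavity_spec_ok(diameter_um: float, volume_fraction: float,
                        common_ranges: bool = True) -> bool:
    """Check a candidate microcavity design against the ranges given in the description.

    diameter_um     -- cavity diameter in microns
    volume_fraction -- cavity volume / total device volume (0.0-1.0)
    common_ranges   -- if True, use the "commonly" stated ranges; otherwise the broad ones
    """
    if common_ranges:
        diameter_ok = 50.0 <= diameter_um <= 350.0
        fraction_ok = 0.12 <= volume_fraction <= 0.50
    else:
        diameter_ok = 0.1 <= diameter_um <= 950.0
        fraction_ok = volume_fraction < 0.60
    return diameter_ok and fraction_ok

# Example: 200 micron cavities occupying 40% of the device volume.
print(microcavity_spec_ok(200.0, 0.40))          # True
print(microcavity_spec_ok(1200.0, 0.40, False))  # False even under the broad range
```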
This invention describes a method for using the device where the device is first inserted into a patient, it is then detected using B-mode ultrasound during or after surgery, and the device is detected in multiple frames, representing different angles of insonation. The ultrasound user can leave the patient and return to find the device at a later time point. This is important because it is often desired to track anatomical or physiological features over a time horizon of multiple days or weeks, and sometimes months or years. This means that user needs to walk away from the patient, return to the patient, and easily locate the device. Another critical feature of the invention is the ability to detect the device using ultrasound from any angle of insonation. This is important because a non-expert is able to locate the marked site and use the visual information to achieve a desired angle or set of angles. The invention enables strong visibility in angles ranging from 25 degrees to 155 degrees from the skin surface. The microcavity feature of the invention provides the ability to visualize the device across such a broad range of insonation angles. Due to the geometry and microcavity feature of the device, the user is able to understand the angle of insonation. Therefore, the user can repeatedly match the same orientation upon each examination, generate the same image of the device, and thus compare anatomic or physiologic conditions reliably over time. Alternatively, the user can approach the device from a new orientation in each additional examination, though will have the geometric information from the device to make proper calculations to adjust for the new angle of insonation. The device should not be compromised at 40 degrees Celsius when in a dark and moist environment, such as human or animal tissue. Compromise includes but is not limited to geometric changes, mechanical deformation, degradation, or microcavity change. The device must maintain its original integrity for at least 72 hours in such conditions. The device must yield contrast when visualized using B-mode ultrasound between 1 cm and 5 cm deep from the surface of the skin. Examples Example 1. An ultrasound-detectable medical device made by extrusion. Specifically, a Nano 16 mm extruder was used with a GFA3-10-30 screw element at 270 mm. The extruder has four zones, each with individual temperature control, which ultimately lead to a die to achieve the desired geometry of the device. The zones were first preheated to 110, 140, 130 and 100° C. respectively. The pressure within the die ranged from 10-70 psi. The feeding rate of the polymer was 2.5 cc/min, and the screw speed fell between 75-100 rpm. The torque on the screw ranged from 1500-3000 Gm. The supercritical CO2 was injected at 200 psi with a flow rate of 20 cfh. When the extruded polymer left the die, it was cooled via an air jacket. In cases when it was desired to achieve variance along the extrusion axis, the device was laser cut once it cooled to room temperature using the air jacket. Example 2. An ultrasound-detectable medical device made by injection molding. The polymer was introduced into the mold via injection through the port. While the material was being injected into the mold, CO2gas was simultaneously injected to provide microbubbles. In another example, the CO2was introduced into the material prior to injection into the mold. Once the material filled the mold, the mold was released via its pins, the part was removed, and the process was repeated. | 10,375 |
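For reference, the extrusion parameters quoted in Example 1 above can be collected into a single configuration record. The Python sketch below simply mirrors those quoted values; the dataclass, field names and the helper function are illustrative assumptions and not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class ExtrusionSetup:
    """Process parameters for the foamed-extrusion example (values taken from Example 1)."""
    zone_preheat_c: tuple      # preheat temperature of the four extruder zones
    die_pressure_psi: tuple    # observed die pressure range
    feed_rate_cc_min: float    # polymer feeding rate
    screw_speed_rpm: tuple     # screw speed range
    screw_torque_gm: tuple     # screw torque range
    co2_pressure_psi: float    # supercritical CO2 injection pressure
    co2_flow_cfh: float        # CO2 flow rate

example_1 = ExtrusionSetup(
    zone_preheat_c=(110, 140, 130, 100),
    die_pressure_psi=(10, 70),
    feed_rate_cc_min=2.5,
    screw_speed_rpm=(75, 100),
    screw_torque_gm=(1500, 3000),
    co2_pressure_psi=200,
    co2_flow_cfh=20,
)

def within(value, bounds):
    """Simple helper to check a measured value against a (low, high) range."""
    low, high = bounds
    return low <= value <= high

print(within(55, example_1.die_pressure_psi))  # True: 55 psi is inside the 10-70 psi window
```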
11858182 | In the figures similar components and parts are indicated by the same reference numerals. InFIG.1an embodiment of a press for manufacturing a sandwich panel is indicated in its entirety by reference numeral10. The press10comprises two press plates12and14respectively, that can be displaced with respect to each other. E.g. the lower press plate14may have a fixed position, while the upper press plate12is vertically displaceable—as indicated by an arrow—e.g. by a hydraulic cylinder (not shown). Each press plate12,14has at least one internal flow channel16in its body, which extends from an inlet18to an outlet20. Typically a number of internal flow channels are distributed in the body without impairing the press plate strength beyond a critical value in relation to the pressures to be exerted during operation. The inlet18is connected to outlet21of a heater22by means of fluid supply line24. The heater22is configured to deliver pressurized hot water, e.g. using a boiler and pump (not separately shown). The outlet20is connected to the inlet25of the heater22via a fluid return line26. Together the heater22, the fluid supply line24, internal flow channels16and fluid return line26are in fluid communication with each other and form a circulation loop used for supplying heat to the press plates. Appropriate valves28and30are provided in the fluid supply line24and fluid return line26. As schematically shown in this embodiment the fluid return line26has a branch line32provided with an expansion valve34allowing to relief pressure and thereby generating steam from the pressurized hot water present in the flow channels16of the press plates12and14, wherein heat used for the conversion of water into steam is withdrawn from the press plates thereby cooling the press plates and as a result the sandwich structure. The steam generated is condensed in condenser36for recovery of heat from the steam. The condensate (water) may be returned to the heater22. A temperature controlled water source38having an outlet39, such as a tap or tank, is in fluid communication with the inlet18of the internal flow channel16for relatively slow cooling of the respective press plate12,14via a water supply line40. The outlet20is connected to the heater22, in this case to the fluid return line26to make-up for the water lost to the steam generation. Via water return line41water may also be cycled back to the water source38. A control device42such as a PC or PLC, having a processor44and memory46controls the operation of the press10, including opening and closing thereof, the conditions like temperature, pressure and flow rates of the supplied hot pressurized water, steam for (pre-)heating and initial cooling and of the temperature controlled water for further cooling and the associated equipment, like heater(s), control valves, expansion valves and venturi injectors. A starting structure (shown two-dimensionally) is indicated by reference numeral50and comprises a core layer52between skins54and56. In this embodiment the core layer52is composed of a thermoplastic comprising a physical blowing agent. The skins54and56are advantageously glass-fibre reinforced thermoplastic layers, wherein preferably the thermoplastic is the same as the one in the core layer52. The starting structure50is placed on the pre-heated lower press plate14in a fitting manner at its periphery, such that lateral (horizontal) expansion/foaming is prevented. 
The press plates12and14have been preheated to the foaming temperature, depending on the thermoplastic used, such as in the range of 170-190° C. The press10is closed such that both press plates12and14contact the starting structure50. Closing of the press is performed quickly in order to prevent premature and uncontrolled foaming of the core layer52before pressure is applied by the press plates12,14. When a homogeneous foaming temperature (above the boiling temperature of the physical blowing agent) of the starting structure50is obtained, the distance between the press plates12,14is increased in a controlled manner, such that the skins54,56maintain their contact with the respective press plate12,14and thus pressure is exerted. Once the distance has increased to a predetermined value thereof and thus the starting structure, in particular the core layer thereof, has foamed to the corresponding predetermined thickness, the flow of hot pressurized water through the flow channels16is interrupted, cooling is started by operating the expansion valve34, and cooling is continued until a predetermined lower temperature, such as in the range of 110-150° C., has been achieved. At this temperature the effect of cooling by conversion into steam is less pronounced, and subsequent cooling of the press plates12,14is performed by water from water source38with a controlled temperature in the range of 40-90° C. in order to cool the sandwich panel to a temperature around 80-95° C. at which foaming does not occur anymore. Further cooling down to ambient temperature can be performed in the press10by circulating water derived from source38having a lower controlled temperature through the press plates12,14. In the case of a chemical blowing agent, the press is heated to a temperature above the decomposition temperature of the chemical blowing agent. Typically the press plates12and14are pre-heated to a temperature well above the melt temperature or melting range of the thermoplastic used and above the decomposition temperature of the chemical blowing agent. Alternatively the press plates12and14are pre-heated to a temperature below the melting point of the thermoplastic to be foamed and thus also below the decomposition temperature of the chemical blowing agent, which is higher than said melting temperature. After closing the press10the temperature of the starting structure is further raised by heating the press plates12,14to a temperature above the decomposition temperature. After decomposition of the blowing agent, the structure is quickly cooled to an appropriate temperature above the melting point/range of the thermoplastic: the flow of hot pressurized water through the flow channels16is interrupted, cooling is started by operating the expansion valve34, and cooling is continued until the predetermined lower temperature above the melting temperature of the thermoplastic is reached. When the starting structure still under pressure has reached a homogeneous temperature just above the melting temperature of the used thermoplastic in the core layer, the distance between the press plates12,14is increased in a controlled manner, such that the skins54,56maintain their contact with the respective press plate12,14and thus pressure is exerted.
Once the distance has increased to a predetermined value thereof and thus the starting structure, in particular the core layer thereof, has foamed to the corresponding predetermined thickness, cooling is re-started by operating the expansion valve34and cooling is continued as explained hereinabove. The intermediate cooling from the decomposition temperature to the melting temperature of the thermoplastic may be omitted. Then foaming is performed at a relatively high foaming temperature. FIG.2shows a second embodiment of a press according to the invention, which is similar to that ofFIG.1, except that the heater22is configured for generating steam and the press10is heated by steam. In order to fill the internal flow channels16with hot pressurized water prior to cooling, a venturi-connection80between a further hot pressurized water source82and the inlet18is provided, typically at each inlet of an internal flow channel16. InFIG.1, a vacuum pump90controlled by the control device42is fluidly connected to the flow channels16via conduits provided with control valves; the pump90can be operated to reduce the pressure in the internal flow channels16, if appropriate. This arrangement can also be incorporated in the embodiment ofFIG.2. | 7,942
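The staged cooling of the press plates described above for the sandwich-panel press (flash cooling by steam generation through the expansion valve down to around 110-150° C., then temperature-controlled water at 40-90° C. until the panel is around 80-95° C., then cooler water down to ambient) can be sketched as a simple mode-selection routine. The Python sketch below is illustrative only: the function name, thresholds and returned labels are assumptions and do not represent the described control device42.

```python
def select_cooling_mode(plate_temp_c: float,
                        steam_cutoff_c: float = 130.0,
                        hold_temp_c: float = 90.0) -> str:
    """Pick the cooling stage for the press plates.

    Above the cutoff (chosen here inside the stated 110-150 degC window) the
    expansion valve is opened so that steam generation withdraws heat quickly;
    below it, temperature-controlled water (40-90 degC) takes over; once the
    panel is around 80-95 degC, foaming has stopped and cooler water can be
    circulated down to ambient temperature.
    """
    if plate_temp_c > steam_cutoff_c:
        return "open expansion valve (flash cooling via steam generation)"
    if plate_temp_c > hold_temp_c:
        return "circulate temperature-controlled water (40-90 degC)"
    return "circulate cooler water down to ambient temperature"

for t in (180.0, 140.0, 120.0, 85.0, 40.0):
    print(f"{t:5.1f} degC -> {select_cooling_mode(t)}")
```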
11858183 | DETAILED DESCRIPTION OF THE INVENTION With reference toFIGS.1to3, reference numeral1wholly indicates a support template for moulds for sports helmets, in particular cycling helmets, in accordance with the present invention. As shown inFIGS.1to3, the support template1is provided with at least one frame2, optionally substantially square in shape, preferably substantially rectangular, which comprises at least one support portion3for the support of at least one component C1, C2of a helmet to be obtained by means of a moulding or co-moulding process, placed on a first side2aof the frame2. The support portion3is advantageously configured to keep the respective component C1, C2of the helmet to be obtained according to a predetermined position inside a respective mould S. In detail, the support portion3of the frame2of the support template1comprises at least one support surface4, preferably two support surfaces4placed in a mirroring manner with respect to a median plane of the frame2. Each support surface4of the support portion3is provided with corresponding support projections4a(FIG.1), preferably consisting of corresponding support blocks, for the rest of a first component C1of the helmet to be obtained, preferably a lower ring C1of the shell of the helmet, optionally according to a position that is spaced apart from the respective support surface4. The support portion3also comprises, for each support surface4, at least one support element5, preferably at least partially arched. Each support element5is engageable with the respective support surface4transversally with respect to the latter by means of corresponding engaging pins5a(FIG.1). As shown inFIG.2, each support element5is advantageously provided with at least one first rest portion5bfor the lateral rest of the first component C1of the helmet to be obtained and with at least one second rest portion5cfor the rest of a second component C2of the helmet to be obtained, preferably the upper shell of the helmet itself. Advantageously, each support element5comprises at least one support projection5d(FIG.2), preferably two placed at opposite ends of the respective support element5, responsible for supporting at least one insert I (FIG.1) of the helmet to be obtained, in particular a respective “clip”, according to a predetermined position inside the mould and at least one support appendage5efor the rest of the second component C2of the helmet to be obtained. Advantageously, each support element5is provided with at least two structural portions6(FIG.1) removably engageable with one another by means of corresponding intermediate coupling elements (not visible in the attached figures). Preferably, the intermediate coupling elements of the structural portions6of each support element5allow the disengagement of one structural portion6with respect to the other by means of at least one relative rotation movement thereof. Always with reference toFIGS.1and2, the support portion3of the frame2also comprises at least one support protrusion7that extends transversally from the first side2aof the frame2in proximity to at least one of the support surfaces4. In detail, it is preferable for the support portion3of the frame2to comprise a plurality of support protrusions7that extend transversally from the first side2aof the frame2between the two support surfaces4, preferably substantially parallel to one other. 
Between the support protrusions7of the support portion3there is at least one central protrusion7athat lays substantially on a median plane of the frame2. Preferably, the support portion3comprises two central support protrusions7athat both lay substantially on the same plane as the frame2and at least two lateral protrusions7barranged between a respective support surface4and the central support protrusions7a. The central support protrusions7aeach have a substantially square or polygonal profile, whereas the lateral protrusions7beach have a circular or rounded profile with a reduction in section towards a free end thereof. As shown inFIGS.1to3, the support protrusions7extends from a substantially ring-shaped base plate8, which is removably engageable with the frame2between the support surfaces4of the latter. In this way, the engagement or disengagement of the support protrusions7with/from the frame2can be carried out by the application or removal of the base plate8through an action of an operator. Always with reference toFIGS.1to3, the frame2of the support template1comprises at least one centring portion9arranged to engage a respective centring seat (not visible in the attached figures) made on the respective mould so as to allow the correct alignment between the support template1and the mould itself in order to position the components C1, C2of the helmet to be obtained according to a predetermined position and centred inside the mould. Advantageously, the centring portion9comprises a plurality of centring pins9aeach arranged to engage a respective centring opening made on the respective mould responsible for moulding the helmet to be obtained. In detail, the centring portion9comprises four centring pins9aplaced in pairs at opposite ends of the frame2of the support template1. The frame2also comprises at least one grip portion10, preferably two, to allow the manual engagement of the support template1by an operator. In particular, each grip portion10comprises at least one bar, preferably cylindrical, which extends inside the footprint of the frame2in proximity to a respective support surface4. Another object of the present invention is a moulding process of a sports helmet in particular a cycling helmet. The moulding process comprises a step of positioning at least one component and/or insert C1, C2, I inside an open mould according to a predetermined position. In detail, the positioning step provides for the simultaneous positioning of all of the components C1, C2and the inserts I of the helmet to be obtained inside the open mould according to a predetermined position. The positioning of the aforementioned components C1, C2and of the inserts I of the helmet to be obtained is advantageously carried out by using the support template1described above. In particular, the positioning step firstly provides for the arrangement of the support template1. Then, at least one component C1, C2of the helmet to be obtained, preferably all of the components C1, C2and the inserts I that must be bound to the polystyrene base body are appropriately placed on the support template1according to predetermined positions. Once all of the components C1, C2and the inserts I of the helmet to be obtained have been arranged on the support template1according to the respective predetermined positions, the support template1is engaged with the open mould so that the components C1, C2and the inserts I are correctly positioned and centred in the mould itself. 
The engagement of the support template1with the respective mould is carried out through the centring pins9athat insert into corresponding centring openings made in the corresponding mould. Once the support template1has been engaged with the corresponding mould, one or more blocking mechanisms of the mould are actuated so as to block the components C1, C2and the inserts I of the helmet to be obtained inside the mould itself together with the support elements5of the support template1. Thereafter, by acting directly on the grip portions10of the support template1the latter is removed from the corresponding mould. Since the components C1, C2and the inserts I of the helmet to be obtained are blocked together with the support elements5of the support template1inside the mould, they consequently disengage from the latter remaining in the mould. The mould is hermetically closed to allow the usual moulding process at the end of which a helmet is obtained that is provided with the components C1, C2and the inserts I initially arranged on the support template1. Once the moulding process is finished, the support elements5are easily removed from the base body of the helmet carried out by rotating one structural portion6with respect to the other. The support template1for moulds for sports helmets, in particular cycling helmets, and the relative moulding process described above solve the problems encountered in the prior art and achieve important advantages. First of all, the support template1and its use in the moulding process of helmets substantially simplifies the latter since it allows and ensures the correct positioning of all of the components C1, C2and/or the inserts I of the helmet to be obtained inside the mould. It should also be considered that the use of the support template1substantially speeds up the positioning operations of the components C1, C2and/or of the inserts in the mould, since the operator, once the latter have been arranged on the support template, must only engage it with the mould and actuate the respective blocking mechanisms thereof. This ease and simplicity in the insertion operations of the components C1, C2and/or of the inserts I of the helmet to be obtained, determines a significant reduction of the production times for each helmet to be obtained with a consequent lowering of the relative production costs. It must also be noted that the support template makes it possible to cut the usual moulding steps in half since the moulding process can be carried out in a single step that provides for the formation of the base body of the helmet to be obtained with the components C1, C2and/or the inserts I of the helmet itself placed in the correct positions. Of course, the elimination of the conventional moulding step of the base body and the subsequent co-moulding step of the latter with the additional parts of the helmet, substantially reduces the production times for each piece to be made, allowing a significant increase in productivity of the moulding process. Finally, it should be considered that the precise positioning of the components C1, C2and/or of the inserts I of the helmet to be obtained inside the mould by means of the support template1totally eliminates or reduces as much as possible the presence of defects at the overlapping and/or juxtaposed areas between the upper shell and the lower ring or under-shell of the helmets made, ensuring the excellent quality thereof. | 10,365 |
11858184 | Reference Numerals:1. supporting frame of injection molding machine;2. standing post;3. fixing block;4. hydraulic rotor;5. feed port;6. screw;7. heating sleeve;8. hydraulic lifting rod;9. nozzle;10. fastening plate;11. connecting port;12. straight flow channel;13. curved flow channel;14. straight material channel;15. curved material channel;16. cooling plate;17. locking pin tube;18. perforated metal filter;19. upper molding plate;20. slider positioning block;21. upper molding cavity;22. upper mold closing positioning block;23. upper circular mold positioning block;24. lower molding plate;25. long-side slider;26. pneumatic pusher;27. lower circular mold positioning block;28. lower mold closing positioning block;29. pallet post;30. lower molding cavity; and31. wide-side slider. DETAILED DESCRIPTION OF THE EMBODIMENTS The present invention is described in further detail below with reference to the embodiments, and the described embodiments are only a part, rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative efforts should fall within the protection scope of the present invention. Embodiment 1 A production method of a floor for quick side-slide installation includes the following steps:Step 1: Sorting of floor blanks:The floor blanks are inspected one by one in terms of moisture content, dimensions and appearance quality. The moisture content is controlled within 6-11%. The dimensions meet a production requirement. The appearance quality meets a requirement of LY/T 2058-2012 Blanks for Solid Wood Flooring.Step 2: Curing of the floor blanks:the floor blanks qualified after being sorted in Step 1 are stacked in a curing area for curing for 25-30 days. The floor blanks are ventilated and protected from sun and rain. The curing area has a relative humidity of 25-55% and a temperature of 20-25° C. For each stack of floor blanks, five floor blanks are arranged in three rows at an equal interval to serve as a stack base, and the other floor blanks are stacked in layers. On each layer, three floor blanks are stacked vertically and seven floor blanks are stacked horizontally. The distance between adjacent stacks of floor blanks is at least 30-50 cm. The height of each stack does not exceed 1.5 m.Step 3: Sanding of the floor blanks:Top surfaces of the floor blanks qualified after being cured in Step 2 are sanded to be flat and smooth, without wavy patterns or missing sanding. An 80, 100 or 120-grit sanding belt is used for sanding.Step 4: Cutting-to-thickness of the floor blanks:the floor blanks qualified after being sanded in Step 3 are put on a conveyor belt, and respective back surfaces of the floor blanks faces up during feeding. A back-groove-free tool is used to cut the floor blanks to a thickness in accordance with the production requirement; and after cutting-to-thickness, the back surfaces of the floor blanks are flat and smooth without indentation.Step 5: Surface treatment of the floor blanks:The top surfaces of the floor blanks are treated to be flat or non-flat. When the top surfaces of the floor blanks are required to be non-flat, the top surfaces of the floor blanks are hand-scraped or wire-brushed. When the top surfaces of the floor blanks are required to be flat, the top surfaces of the floor blanks need no treatment. In Step 5, the floor blanks are specifically hand-scraped as follows. 
The floor blanks qualified after cutting-to-thickness in Step 4 are put on a conveyor belt, and respective back surfaces of the floor blanks faces up during feeding to enter a planer. Swing arcs of six swing knives of the planer are adjusted, where any two swing knives are adjusted to have a slightly larger swing arc to avoid excessive overlap with the rest four swing knives. The six swing knives have a depth of 0.2-0.4 mm. In Step 5, the floor blanks are specifically wire-brushed as follows. The floor blanks qualified after cutting-to-thickness in Step 4 are placed with respective back surfaces facing up, and are wire-brushed with a steel wire roller, which is provided with a 0.3-0.6 mm steel wire. Usually 6 sets of wire-brushing rollers are used, and the number of the wire-brushing roller sets specifically depends on the top surface effect on site.Step 6: Cutting-to-length and molding of the floor blanks:As shown inFIG.6, the floor blanks after being surface-treated in Step 5 are placed on a four-sided planer. First, a double-end milling device is used to cut the floor blanks to a length in accordance with the production requirement. A pre-cutting knife, a molding knife, a finishing knife and a locking knife of the four-sided planer are activated in sequence to mill two layers of fixing grooves at a periphery of the floor blanks, where upper fixing grooves and lower fixing grooves are retracted from the top surfaces to the back surfaces of the floor blanks. A boss with an inclined side is formed between a top surface of each of the upper fixing grooves and the top surface of a corresponding floor blank. An acute angle is defined between the inclined side and a top surface of a first groove. When the floor blank is cut to a length, an edge breakage is less than or equal to 4 mm. The shape of a cross section of the fixing groove includes, but is not limited to, a rectangle and a trapezoid.Step 7: Spraying of anti-cracking oil on the floor blanks:The anti-cracking oil is evenly sprayed by a spray gun onto peripheral end surfaces of the floor blanks after being milled in Step 6. During spraying, the floor blanks to be sprayed are aligned, and the anti-cracking oil is sprayed obliquely from top to bottom, and the top surfaces of the floor blanks should not be sprayed.Step 8: Plastic encapsulation of the floor blanks:The floor blanks after being sprayed with the anti-cracking oil in Step 7 are placed in a lower molding cavity30of a lower mold with respective surfaces facing down. Pneumatic pushers26push a long-side slider25and a wide-side slider31to move inward respectively to clamp a to-be-injected floor blank, and the lower mold and an upper mold are closed. A hydraulic rotor4starts to rotate and pressurize. A molten material passes through a nozzle9via straight flow channels12, curved flow channels13and locking pin tubes17to adhesive ports of an upper molding cavity21, and is injected into locking cavities at the periphery of the to-be-injected floor blank. Cooling water of 20-25° C. is pumped to cool the lower molding cavity30of the lower mold for 20-30 s. After the treatment lasts for a total of 40-50 s, the upper mold is moved upward to be separated from the lower mold. The long-side slider25and the wide-side slider31of the lower mold are separated under the action of the pneumatic pushers26. The injected locking floor is lifted by the pallet posts29, and the locking floor is separated from the molding cavity. 
An injection-molded frame and injection-molded locks formed on the injection-molded frame are formed at the periphery of the floor blank, and the injection-molded frame is hidden under the boss of the floor blank. As shown inFIG.1, in Step 8, an injection head includes standing posts2, a fixing block3, a hydraulic lifting rod8, a heating sleeve7, the hydraulic rotor4, a feed port5, a screw6and the nozzle9. The hydraulic rotor4is connected to a top of the screw6, and the nozzle9is connected to a bottom of the screw6. The feed port5is provided on the screw6and adjacent to the hydraulic rotor4. The fixing block3is fixedly sleeved outside the heating sleeve7. The standing posts2penetrate and are slidably connected to four corners of the fixing block3. Bottoms of the standing posts2are slidably fixed in sliding grooves provided in a supporting frame1of an injection molding machine. The supporting frame is provided with a first through hole opposite to the nozzle9. The hydraulic lifting rod8is provided between the supporting frame1of the injection molding machine and the fixing block3. In Step 8, when the injection head injects the molten material into the locking cavities, a flow rate of solute in the molten material is required to be greater than 22 g/10 min. After the molten material is solidified, the flow rate of the solute is tested to be greater than 16 g/10 min. The molten material is at 220-230° C. As shown inFIG.2, in Step 8, the upper mold includes a fastening plate10, a connecting port11, the straight flow channels12, the curved flow channels13, a cooling plate16and an upper molding plate19, which are arranged in sequence from top to bottom and connected to the supporting frame1of the injection molding machine. As shown inFIG.4, the upper molding cavity21is provided on the upper molding plate19. The fastening plate10is provided with a second through hole opposite to the first through hole. A third through hole is provided at a middle of the straight flow channels12. A top of the connecting port11is connected to the second through hole, and a bottom of the connecting port11is connected to the third through hole. At least two straight material channels14are embedded in the straight flow channels12, and inlets of all the straight material channels14communicate with the third through hole. The number of the curved flow channels13is equal to the number of the straight material channels14. Fourth through holes are respectively provided at middles of the curved flow channels13. Outlets of the straight material channels14respectively communicate with the fourth through holes. a plurality of curved material channels15are embedded in the curved flow channels13. In this embodiment, the curved flow channels13are embedded with 40 curved material channels, and inlets of all the curved material channels15respectively communicate with the fourth through holes. The cooling plate16is embedded with locking pin tubes17, where the number of the locking pin tubes17is identical to the number of the curved material channels15. Outlets of the curved material channels15respectively communicate with inlets of the locking pin tubes17in one-to-one correspondence. The upper molding cavity21is internally provided with lock-shaped adhesive ports, and the outlets of the locking pin tubes17respectively communicate with the adhesive ports of the upper molding cavity21. 
Specifically, electric heating tubes for heat preservation of the molten material are respectively provided outside the straight flow channels12and the curved flow channels13. Four straight flow channels12are arranged and distributed in an X shape. In Step 8, the lower mold includes a pallet supporting frame, a lower molding plate24provided on the pallet supporting frame, the lower molding cavity30provided on the lower molding plate24and matched with the upper molding cavity21, the long-side slider25provided on the lower molding plate24and corresponding to a long side of the lower molding cavity30, the wide-side slider31provided on the lower molding plate24and corresponding to a wide side of the lower molding cavity30, and the pallet posts29. The bottoms of the pallet posts29are fixedly connected to the pallet supporting frame. The lower molding plate24in the lower molding cavity30is provided with ejection holes matched with the pallet posts29. The tops of the pallet posts29are provided in the ejection holes, and the tops of the pallet posts29are movable upward along the ejection holes to stick out of the ejection holes. The pneumatic pushers26are respectively connected to the long-side slider25and the wide-side slider31. In this embodiment, there are 8 pallet posts29, which lift the injection-molded floor in the molding cavity up and out of the molding cavity when the long-side slider and wide-side slider31of the lower mold are separated from the molding cavity. In Step 8, the mold is heated before the injection molding of the injection-molded locks of the floor blank. The cooling water in the upper molding plate19is connected, and the electric heating tubes outside the straight flow channels12and the curved flow channels13are turned on. The temperature of the electric heating tubes and the temperature of the nozzle9of the injection head are set to 220-230° C. When the temperature of the electric heating tubes outside the straight flow channels12and the curved flow channels13rises to 160° C., electric heating tubes outside the locking pin tubes17are turned on and set to 220-230° C. In Step 8, when the injection head injects the molten material into the locking cavities, 120-130 g of molten material is injected into the injection-molded locks of a single injection-molded floor blank. When the molten material passes through the nozzle9to the straight flow channels12, the molten material is delivered at a pressure of 72 bar and a flow rate of 92% for a stroke of 58 mm. When the molten material passes through the straight flow channels12to the curved flow channels13, the molten material is delivered at a pressure of 28 bar and a flow rate of 68% for a stroke of 38 mm. When the molten material passes through the curved flow channels13to the locking pin tubes17, the molten material is delivered at a pressure of 15 bar and a flow rate of 48% for a stroke of 28 mm. When the molten material passes through the locking pin tubes17to the adhesive ports of the upper molding cavity21, the molten material is delivered at a pressure of 52 bar for 0.13 s and at a flow rate of 48%. The injection parameters are adjusted according to a gap between the molding cavity and the wood, and the total weight of the material flowing out of all the adhesive ports is approximately the material weight. 
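To make the staged delivery parameters easier to compare, the following sketch, not part of the original disclosure, collects the Embodiment 1 values in a small data structure. The class and field names are illustrative assumptions; the pressures, flow-rate percentages, strokes, the 0.13 s final stage, the 120-130 g shot weight and the 220-230° C. melt temperature are the values stated above. Embodiments 2 and 3 below use the same stages with slightly different numbers.

```python
# Illustrative collection of the staged injection profile of Embodiment 1.
# Structure and names are assumptions for presentation; the numeric values
# (bar, %, mm, s, g, degC) are those stated in the description.

from dataclasses import dataclass
from typing import Optional

@dataclass
class InjectionStage:
    name: str
    pressure_bar: float
    flow_rate_pct: float
    stroke_mm: Optional[float] = None  # stroke-controlled stages
    time_s: Optional[float] = None     # time-controlled final stage

EMBODIMENT_1 = [
    InjectionStage("nozzle_to_straight_flow_channels", 72, 92, stroke_mm=58),
    InjectionStage("straight_to_curved_flow_channels", 28, 68, stroke_mm=38),
    InjectionStage("curved_channels_to_locking_pin_tubes", 15, 48, stroke_mm=28),
    InjectionStage("pin_tubes_to_adhesive_ports", 52, 48, time_s=0.13),
]

SHOT_WEIGHT_G = (120, 130)  # molten material per floor blank
MELT_TEMP_C = (220, 230)    # heating tubes and nozzle set point

for stage in EMBODIMENT_1:
    print(stage)
```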
Step 9: Application of a paint to the back surfaces of the floor blanks:The back surface of the floor blank after being subjected to plastic-encapsulation in Step 8 is sanded by a sander, and then a paint is applied on the back. In Step 9, the paint is applied to the back surfaces of the floor blanks as follows. The back surface of the floor blank is sanded, the dust left on the back surfaces of the floor blanks due to sanding is absorbed, and a layer of transparent putty is coated evenly. The floor blank is subjected to ultraviolet (UV) semi-curing, sanded with sandpaper, and coated with a high-hardness primer and an ordinary primer in sequence. Then the floor blank is subjected to UV semi-curing, sanded with 240-grit sandpaper, and coated with a colored putty and a transparent primer. The amount of the paint applied is above 100 g/m2. Step 10: Application of the paint or vegetable oil to the floor blanks:The floor blanks after being applied with the paint on the back surfaces in step 9 are sanded by the sander, and then are applied with the paint or the vegetable oil. Embodiment 2 As shown inFIG.2, this embodiment is a further optimization of Embodiment 1. The same parts between the two embodiments will not be repeated, and the improvement of this embodiment based on Embodiment 1 is as follows. In this embodiment, in Step 8, when the injection head injects the molten material into the locking cavities, 130 g of molten material is injected into the injection-molded locks of a single injection-molded floor blank. When the molten material passes through the nozzle9to the straight flow channels12, the molten material is delivered at a pressure of 78 bar and a flow rate of 96% for a stroke of 65 mm. When the molten material passes through the straight flow channels12to the curved flow channels13, the molten material is delivered at a pressure of 32 bar and a flow rate of 72% for a stroke of 42 mm. When the molten material passes through the curved flow channels13to the locking pin tubes17, the molten material is delivered at a pressure of 21 bar and a flow rate of 52% for a stroke of 32 mm. When the molten material passes through the locking pin tubes17to the adhesive ports of the upper molding cavity21, the molten material is delivered at a pressure of 58 bar for 0.17 s and at a flow rate of 52%. Embodiment 3 As shown inFIG.2, this embodiment is a further optimization of Embodiment 2. The same parts between the two embodiments will not be repeated, and the improvement of this embodiment based on Embodiment 2 is as follows. In this embodiment, in Step 8, when the injection head injects the molten material into the locking cavities, 120-130 g of molten material is injected into the injection-molded locks of a single injection-molded floor blank. When the molten material passes through the nozzle9to the straight flow channels12, the molten material is delivered at a pressure of 75 bar and a flow rate of 95% for a stroke of 60 mm. When the molten material passes through the straight flow channels12to the curved flow channels13, the molten material is delivered at a pressure of 30 bar and a flow rate of 70% for a stroke of 40 mm. When the molten material passes through the curved flow channels13to the locking pin tubes17, the molten material is delivered at a pressure of 18 bar and a flow rate of 50% for a stroke of 30 mm. 
When the molten material passes through the locking pin tubes17to the adhesive ports of the upper molding cavity21, the molten material is delivered at a pressure of 55 bar for 0.15 s and at a flow rate of 50%. Embodiment 4 As shown inFIG.2, this embodiment is a further optimization of Embodiment 3. The same parts between the two embodiments will not be repeated, and the improvement of this embodiment based on Embodiment 3 is as follows. In this embodiment, each of the locking pin tubes17is provided with a T-shaped cross section. The locking pin tubes17are hollow inside, and each of the locking pin tubes17is internally provided with a perforated metal filter18. A diameter of the perforated metal filter18and a diameter of each of the outlets of the locking pin tubes17are both 1 mm. Each of the locking pin tubes17is externally provided with an electric heating tube for heat preservation of the molten material. As shown inFIG.3, the locking pin tubes17have a T-shaped design, and are externally provided with a spiral electric heating sleeve7for heat preservation of the molten material. The locking pin tubes are hollow inside and are each provided therein with a perforated metal filter18. The perforated metal filter18has a diameter of 1 mm, which is matched with the 1 mm outlet of the corresponding locking pin tube17to prevent a pin gate from being blocked by foreign matter in the molten material. The outlet of the locking pin tube17is connected to a corresponding adhesive port of the upper molding cavity21. Under normal pressure, the molten material in the 1 mm pin gate will not flow out in a large amount, which plays a role of throttling. The locking pin tubes17are embedded in a channel of the cooling plate16, and cooling water is passed into the cooling plate16to keep the temperature of the upper molding cavity21not higher than 40° C., and prevent any material change or size deviation of the molding cavity due to high temperature. Embodiment 5 As shown inFIG.4, this embodiment is a further optimization of Embodiment 4. The same parts between the two embodiments will not be repeated, and the improvement of this embodiment based on Embodiment 4 is as follows. In this embodiment, four corners of the upper molding plate19of the upper mold are provided with upper circular mold positioning blocks23. Upper mold closing positioning blocks22are provided at a center of the upper molding plate19in the upper molding cavity21. The upper molding plate19is provided with slider positioning blocks20opposite to a long side and a wide side of the upper molding cavity21respectively. Four corners of the lower molding plate24of the lower mold are provided with lower circular mold positioning blocks27that are respectively matched with the upper circular mold positioning blocks23. Lower mold closing positioning blocks28matched with the upper mold closing positioning blocks22are provided at a center of the lower molding plate24. The slider positioning blocks20are used to ensure the precise positioning of the long-side slider25and the wide-side slider31of the lower mold when the upper and lower molds are closed, so as to ensure the position accuracy of the molten material flowing and molding at the periphery of the floor blank. Embodiment 6 As shown inFIG.5, this embodiment is a further optimization of Embodiment 5. The same parts between the two embodiments will not be repeated, and the improvement of this embodiment based on Embodiment 5 is as follows.
In this embodiment, there are two lower molds, namely a lower left mold and a lower right mold. A floor blank is placed in the lower molding cavity30of the lower left mold. The to-be-injected floor blank is clamped through the pneumatic pushers26, and the supporting frame1of the injection molding machine slides up to the upper mold along the sliding grooves, such that the lower left mold is closed with the upper mold to complete injection. After the lower left mold is separated from the upper mold, the lower right mold is subjected to the same operation to be closed with the upper mold to complete injection. When the floor blank is picked and placed into one mold, the floor blank in the other mold is injection-molded, which improves production efficiency. Embodiment 7 This embodiment is a further optimization of Embodiment 6. The same parts between the two embodiments will not be repeated, and the improvement of this embodiment based on Embodiment 6 is as follows. In this embodiment, the floor blanks include, but are not limited to, pure solid wood floors, glued floors, fiber floors, shavings floors, artificial floors, parallel-to-grain wood laminate floors, joinery floors, finger-jointed floors, integrated floors, wood-plastic composite floors, magnesium oxide floors and bamboo parquet floors. Embodiment 8 This embodiment is a further optimization of Embodiment 7. The same parts between the two embodiments will not be repeated, and the improvement of this embodiment based on Embodiment 7 is as follows. In this embodiment, the molten material is composed of polyethylene, polypropylene, polyvinyl chloride, polystyrene, polyoxymethylene, polycarbonate, an acrylic plastic, polyolefin, a polyolefin copolymer, polysulfone, polyphenylene ether, a thermoplastic composed of chlorinated polyether, a hot melt adhesive, or a thermoplastic elastomer. The above descriptions are merely preferred embodiments of the present invention and are not intended to limit the present invention. Any modifications, equivalent replacements and improvements made within the spirit and principle of the present invention should all fall within the scope of protection of the present invention. | 22,816 |
11858185 | DESCRIPTION OF EMBODIMENTS Embodiments of the movable platen, the opening/closing apparatus and the molding apparatus according to the present invention will now be described with reference to the attached drawings. FIG.1is a side view schematically showing a mold clamping apparatus according to an embodiment in which the present invention is applied to a mold clamping apparatus of an injection molding machine (molding apparatus). InFIG.1, reference numeral14denotes the entire mold clamping apparatus. A fixed die plate (fixed platen)20is fixed at one end of a frame11of the mold clamping apparatus (opening/closing apparatus)14. A link housing (pressure-receiving platen)24is disposed at the other end of the frame11. A movable die plate (movable platen)22, located between the fixed die plate20and the link housing24, is movably installed on the frame11. A fixed mold21is mounted to the fixed die plate20, while a movable mold23is mounted to the movable die plate22. The fixed mold (the other mold, one mold)21and the movable mold (one mold, the other mold)23constitute a mold19. A cavity25for a molded product is formed in the mold19, i.e. when the fixed mold21and the movable mold23are closed. The fixed die plate20and the link housing24are connected via a plurality of (e.g. four) tie bars36. When clamping the mold19after closing the movable mold23and the fixed mold21, the tie bars36are subjected to a mold clamping force exerted by a toggle link mechanism (opening/closing mechanism, mold opening/closing mechanism, mold clamping mechanism)28. As shown inFIG.1, the toggle link mechanism28includes, for example, a pair of upper toggle links and a pair of lower toggle links, each toggle link consisting of a first link30, a second link31and a third link32.FIG.1shows one of the upper toggle links and one of the lower toggle links. All the toggle links have the same construction. One end of the first link30is connected to the link housing24via a toggle pin34. The other end of the first link30is connected to one end of the second link31via a toggle pin35. The other end of the second link31is connected to a mold clamping platen42, constituting the movable die plate22, via a toggle pin37. The movable die plate22of this embodiment includes, for example, the mold clamping platen42connected to the toggle link mechanism28, and a mold mounting platen44to which the movable mold23is to be mounted. InFIG.1, reference numeral26denotes a crosshead connected to the toggle link mechanism28. One end of the third link32is connected to the crosshead26via a toggle pin38. The other end of the third link32is connected to the first link30via a toggle pin39. In this embodiment the link housing24is provided with a servo motor (drive)40as a drive source for the toggle link mechanism28. A not-shown nut portion of a ball screw mechanism, which converts rotation of the servo motor40into a linear movement and transmits the movement to the toggle link mechanism28, is provided in the center of the crosshead26. A ball screw27is in engagement with the nut portion. Rotation of the servo motor40is transmitted to the ball screw27via a timing belt29. Movement of the crosshead26in the mold opening/closing directions is guided by a not-shown guide which is supported by arm portions24aextending from the link housing24in the mold closing direction. The first links30and the second links31of the toggle link mechanism28, shown inFIG.1, are in an extended state. 
When the crosshead26moves rightward, the first links30and the second links31extend, thereby advancing the movable die plate22and closing the mold. A mold clamping force is generated by further pressing the movable mold23against the fixed mold21in the mold closing direction after contact of the movable mold23with the fixed mold21. On the other hand, when the crosshead26moves leftward inFIG.1, the first links30and the second links31are bent by the third links32, whereby the movable die plate22moves backward and opens the mold19. FIG.2shows a linear guide device50for supporting the movable die plate22and guiding its movement. The linear guide device (guide device, guide mechanism)50includes, for example, a pair of platen surface support portions (support portions)52which support the movable die plate22and which are disposed on both sides of the mold mounting surface of the mold mounting platen44of the movable die plate22, a leg portion53constructed integrally with each platen surface support portion52, and a linear guide (linear motion guide)55held on the leg portion53and which slides on a guide rail (rail)54laid on a base11. Each linear guide55is, for example, comprised of a linear bearing having rollers or steel balls which roll on a rolling surface of the guide rail54. It is also possible to use a linear bearing that slides on a sliding surface with the use of lubrication oil. The linear guide55is in engagement with the guide rail54e.g. having a T-shaped cross section. Therefore, if a moment acts on the linear guide device50due to the weight of the movable mold23, the linear guide device50will not be detached from the guide rail54as will be described below. This prevents floating of the mold clamping platen42. The mold clamping platen42of the movable die plate22has a projecting portion62to which the mold mounting platen44is fixed. The mold mounting platen44has a larger size than the projecting portion62. The mold clamping platen42and the mold mounting platen44are supported on the platen surface support portions52in the following different manners. In this embodiment, part of the upper surface of each platen surface support portion52serves as a mold clamping platen support surface56on which the lower surface of the mold clamping platen42is seated. As shown inFIG.3, the mold clamping platen42is fastened (fixed) to the platen surface support portions52(more specifically the mold clamping platen support surfaces56) e.g. by using fastening members such as bolts58. On the other hand, as shown inFIG.2, the mold mounting platen44has a larger lateral size than the projecting portion62of the mold clamping platen42, and thus projects from the side surfaces of the mold clamping platen42. A horizontal seating surface60is formed in a stepped portion provided in the lower surface of the mold mounting platen44. A horizontal mold mounting platen support surface, on which the seating surface60of the mold mounting platen44is seated, is formed in an area, located outside the mold clamping platen support surface56, of each platen surface support portion52. A stepped portion need not necessarily be provided in the lower surface of the mold mounting platen44. Thus, the mold mounting platen44may be placed on the platen surface support portions52, with a flat lower surface of the mold mounting platen44in contact with the upper surfaces of the platen surface support portions52.
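The force build-up of the toggle extension described above can be illustrated with a deliberately simplified, friction-free kinematic sketch; this is not the patent's exact geometry, and the link length and elbow offsets below are assumed values. Two equal links meet at an elbow pin that is pulled toward the centerline, so the platen advance per unit elbow travel shrinks, and the force amplification grows, as the links approach full extension.

```python
# Simplified symmetric two-link toggle: links of equal length L meet at an elbow
# pin offset y from the centerline. As y -> 0 the links extend fully and the
# ideal force amplification grows without bound. Values are assumptions.

import math

L = 0.30  # assumed link length in metres

def platen_span(y: float, link: float = L) -> float:
    """Distance between housing pin and platen pin for elbow offset y."""
    return 2.0 * math.sqrt(link**2 - y**2)

def force_amplification(y: float, link: float = L) -> float:
    """Ideal (friction-free) ratio of platen force to elbow-drive force."""
    # d(span)/dy = -2y / sqrt(L^2 - y^2); amplification is its inverse magnitude.
    return math.sqrt(link**2 - y**2) / (2.0 * y)

for y in (0.15, 0.05, 0.01):  # elbow approaching full extension
    print(f"y={y:.2f} m  span={platen_span(y):.3f} m  "
          f"amplification={force_amplification(y):.1f}x")
```

Under these assumptions the amplification already exceeds 10x close to full extension, which illustrates why a toggle mechanism can generate a large mold clamping force from a modest drive torque.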
In the embodiment illustrated inFIG.2, the mold mounting platen44has a large lateral size and is placed directly on the platen surface support portions52; however, the present invention is not limited to such a construction. Thus, in a possible case, the mold mounting platen44has a small lateral size and does not reach the platen surface support portions52. It is possible in that case to mount brackets to the lower surface of the mold mounting platen44or provide projecting portions projecting from the lower surface, and to place the mold mounting platen44on the platen surface support portions52via the brackets or the projecting portions. Unlike the mold clamping platen42, the mold mounting platen44is not fastened (fixed) to the platen surface support portions52by means of fastening members such as bolts, but simply placed (supported) on the platen surface support portions52. The mold mounting platen44is detachably mounted to the projecting portion62of the mold clamping platen42. The action and the effects of this embodiment will now be described with reference toFIGS.1through3. Referring toFIG.2, when the movable mold23is mounted to the mold mounting platen44of the movable die plate22, a moment acts on the mold clamping platen42due to the weight of the mold23. The moment acts in such a manner as to tilt the mold clamping platen42toward the movable mold23. If the mold clamping platen42and the mold mounting platen44are both simply placed (supported) on the platen surface support portions52of the linear guide device50, the mold clamping platen42will float and the movable mold23will tilt due to the moment that acts on the mold clamping platen42. According to the movable die plate22of this embodiment, on the other hand, the mold clamping platen42is fastened to the mold clamping platen support surfaces56of the linear guide device50by using fastening members such as bolts58. This can prevent the mold clamping platen42from floating even though the above moment acts on it, and can prevent the movable mold23from tilting or almost falling over. The amount of deformation of the mold clamping platen42upon clamping of the mold19is larger in its upper portion than that of its lower portion which is fastened (fixed) to the mold clamping platen support surfaces56, whereby distortion occurs in the mold clamping platen42. However, since the mold clamping platen42is connected, in the projecting portion62, to the mold mounting platen44, the distortion is intrinsically hardly transmitted to the mold mounting platen44. Unlike the mold clamping platen42, the mold mounting platen44is not fastened or fixed, but simply placed on the platen surface support portions52. Therefore, the lower side of the mold mounting platen44also deforms freely upon clamping of the mold19, leading to a small difference in deformation between the upper and lower portions of the mold mounting platen44. This makes the distribution of pressure on the mold19uniform, leading to enhancement of the quality of a molded product. By thus fastening (fixing) the mold clamping platen42to the platen surface support portions52of the linear guide device50while not fastening (fixing) but simply placing (supporting) the mold mounting platen44on the platen surface support portions52, it becomes possible to prevent floating of the mold clamping platen42and to reduce deformation of the mold mounting platen44. This makes it possible to perform mold clamping stably with high accuracy, thereby enhancing the quality of a molded product. 
FIGS.4and5illustrate a second embodiment of the present invention. The second embodiment illustrated inFIG.4differs from the first embodiment in that the platen surface support portions52of the linear guide device50of the first embodiment are each provided with a mechanism capable of adjusting the height of the mold mounting platen44. Therefore, a description will be given solely of the different feature of the second embodiment, and a detailed description of the same features as the first embodiment will be omitted. As shown inFIG.4, in the second embodiment, an adjustment mechanism70capable of adjusting the height of the mold mounting platen44is mounted (provided, installed) in each of the platen surface support portions52of the linear guide device50, disposed on both sides of the mold mounting surface of the mold mounting platen44. The adjustment mechanism70is, for example, comprised of a bolt71, a nut72and a mold mounting platen support member74. A lower portion of the bolt71is screwed into a female thread formed in the platen surface support portion52. The nut72is in engagement with the bolt71. The mold mounting platen support member74, which contacts the seating surface60of the mold mounting platen44and supports the mold mounting platen44, is coupled to the top of the bolt71. The adjustment mechanism70can change the height of the mold mounting platen support member74by rotating the bolt71while keeping the nut72loose. Thus, the adjustment mechanism70not only can support the mold mounting platen44but can also adjust the height of the mold mounting platen44. The mold mounting platen44can be held at an adjusted height by tightening the nut72after the adjustment. The second embodiment, in which the platen surface support portions52are provided with the adjustment mechanisms70capable of adjusting the height of the mold mounting platen44, can have the effect of being capable of adjusting the parallelism of the molds (parallelism of the movable mold23with respect to the fixed mold21) besides the effects described above with reference to the first embodiment. FIG.5illustrates a variation of the second embodiment. Instead of the adjustment mechanism70comprised of the bolt71, the nut72and the mold mounting platen support member74, the variation uses an adjustment mechanism70comprised of a jack75and the mold mounting platen support member74. According to this variation, the height of the mold mounting platen44can be adjusted by means of the jack75. This variation can achieve the same effects as the second embodiment illustrated inFIG.4. Another variation of the second embodiment uses an adjustment mechanism70comprised of a wedge (not shown) and the mold mounting platen support member74. According to this variation, the height of the mold mounting platen44can be adjusted by means of the wedge. This variation can also achieve the same effects as the second embodiment illustrated inFIG.4. Though in the above-described second embodiment the adjustment mechanism70is provided in each platen surface support portion52, the present invention is not limited to such a construction. Thus, it is possible to fix the adjustment mechanism70to the lower surface of the mold mounting platen44, and to adjust the height of the mold mounting platen44with the adjustment mechanism70in contact with each platen surface support portion52. Thus, the presence of the adjustment mechanism70between the mold mounting platen44and each platen surface support portion52is all that is needed. 
Though in the above-described embodiments the mold clamping platen42and the linear guide device50are constructed as separate structures, the mold clamping platen42may be constructed integrally with the linear guide device50. Thus, the mold clamping platen42may also have the function of the linear guide device50. Also in this case, as with the first embodiment of the present invention, the mold mounting platen44is mounted to the projecting portion62of the mold clamping platen42, and the mold mounting platen44is not fixed but supported in an unconstrained state on the platen surface support portions52of the linear guide device50. While the movable platen, the opening/closing apparatus and the molding apparatus according to the present invention have been described with reference to the embodiments in which they are applied in the injection molding machine, the present invention can also be applied in other molding apparatuses such as a die-casting machine. | 14,930 |
11858186 | It is noted at the outset that, in the different embodiments, identical parts are provided with identical reference numerals or identical component part designations, wherein the disclosures contained in the entire specification can analogously be applied to identical parts with identical reference numerals or identical component part designations. The positions chosen to be disclosed in the specification, such as top, bottom, side, etc., for example, also relate to the descriptive figure and, where there is a change in position, can be applied accordingly to the new position. Individual features or combinations of features from the exemplary embodiments shown and described can also constitute independent inventive solutions in their own right. It should be generally noted in regard to the exemplary embodiments ofFIGS.1through3that the use of an apparatus1of this type preferably applies to feeding single grains to an injection molding machine2, that is, in injection molding technology. FIGS.1through3show the apparatus1, in particular a single-grain dosing instrument1, for feeding single grains to a processing machine, in particular to an injection molding machine2, as schematically illustrated. The apparatus1comprises, at least at the basic level, a singulating device, or singulator, that is formed from a disk, in particular a singulating disk3. The singulating disk3is provided with suction openings5distributed over a pitch circle. The singulating disk3is mounted on or attached to a shaft6. In the exemplary embodiments shown, the apparatus is formed from at least two housing sections7,8, which are detachably attached to one another. Each housing section7,8is thereby respectively embodied as a cast aluminum chamber or container chamber. Of course, it is possible that the housing sections7,8can also be composed of different materials. In the design of the embodiments, it is essential that the second housing section8forms a negative-pressure chamber10. The non-moving negative-pressure chamber10, in particular the second housing section8, is simultaneously used for mounting a drive unit11, or drive, in particular of a stepper motor12that directly drives the singulating disk3, which is preferably formed from stainless steel, via the shaft6. The singulating disk3is connected in a positive fit or force fit to the shaft6, in particular the motor drive shaft, via a quick-action screw system13. Furthermore, the negative-pressure chamber10or the second housing section8also comprises a suction nozzle14. Preferably, this nozzle is located in a grain receiving region15which defines the lower region of the apparatus1and is illustrated by dot-dashed lines. Additionally, the negative-pressure chamber10or the housing section8also has a grain discharge region16, also illustrated by dot-dashed lines, in which a blow-off nozzle17is located, wherein the blow-off nozzle17, in particular the grain discharge region16, is arranged such that it is separated from the negative-pressure region10in the grain receiving region15in an airtight manner. Preferably, the suction nozzle14is connected to a suction inlet of a negative-pressure generating device, in particular a compressor18, and the outlet of the compressor18or of the negative-pressure generating device is connected to the blow-off nozzle17. It is thus achieved that, with one compressor18for example, both a negative pressure in the grain receiving region15and also an overpressure in the grain blow-off region16are generated.
However, it is of course also possible that two different systems can be used for the two regions. The negative-pressure generator18thereby suctions the air away from the housing, in particular via the pressure chamber10, so that negative pressure is formed in the region of the suction openings5and/or in the pressure chamber10, and air thus continuously flows in through the suction openings5. Preferably, the singulating disk3is provided with approx. 40 small through bores or suction openings5, in particular between 0.5 mm and 5 mm, distributed uniformly over a smaller pitch circle diameter. Of course, it is possible that more or fewer through bores or suction openings5can be arranged. For a reliable operation of the single-grain conveying means, a bypass system19is arranged, as will be explained later, in the region of the negative-pressure generator18, in which system the negative-pressure generator suctions and discharges a partial airflow, always ambient air, via the bypass line. The bypass system19is preferably formed from bezels which define the volume flow rate and suction a partial flow as ambient air where necessary. A granular material reservoir20for bulk material, in particular for individual grains21, preferably of plastic granular material or plastic pellets or plastic grains21, is arranged in the grain receiving region15in the first housing section7. The granular material reservoir20is thereby arranged frontally on the singulating disk3, that is, on the surface of the disk, so that the grain21, in particular the bulk material21, lies directly against the singulating disk3. Here, it is essential that at least one, but preferably more, suction openings5on the singulating disk3dip into the granular material reservoir20, so that grains21can be received accordingly by the suction openings5. The apparatus1, in particular the single-grain dosing instrument1, functions such that a negative pressure is applied to or generated at the suction nozzle14, for example via the externally located compressor or negative-pressure generator18, whereby air or ambient air22in the negative-pressure chamber10is suctioned away, as schematically indicated by arrows, and a defined airflow is generated in the second housing section8. Through the negative pressure present, the singulating disk3is pressed onto the working seal or sealing element9, and via the through bores or suction openings5on the singulating disk3, ambient air22flows in uniformly from the first housing section7, which means that the ambient air or air22from the atmosphere is suctioned into the negative-pressure chamber10through the suction bores5of the singulating disk3via the first housing section7, and is suctioned from said chamber by the, preferably external, compressor18via the suction nozzle14. In this case, it is possible that a sensor is arranged in the negative-pressure chamber10, which sensor is connected to a controller24. Preferably, the pressure regulation is operated via a bypass system19in which the compressor18suctions and discharges a partial airflow, always ambient air, via the bypass line. Because the singulating disk3is driven and rotated, the singulating disk3and the suction openings5located thereon sweep through the plastic granular material or bulk material21in the granular material reservoir20in the grain receiving region15. 
The suction openings5arranged on the singulating disk3are thereby selected such that, for one suction opening5, preferably only one grain21each is ever suctioned by the inflowing air22and, due to the negative pressure present on the singulating disk3from the negative-pressure chamber10, remains stuck, which means that the singulating disk3is arranged against the sealing element9or the air seal such that the disk can be rotated in the most airtight possible manner, as a result of which sufficient air22is sucked in via the suction openings5. The more air22that can be suctioned, the greater the holding force on a suctioned grain21and therefore also the greater the built-up negative pressure. As a result of the further rotation, according to arrow23, the path leads to the grain discharge region16through individual grain scrapers24and/or grain aligners25. The grain scrapers24and/or grain aligners25have the task of scraping off any grain21that is adhering and/or adhering to a suctioned grain21, as schematically indicated inFIG.2by an arrow27, and/or of aligning the grains21in a specific position. The grain scrapers24and/or grain aligners25are preferably attached to the housing section7in a springable and/or rotatable or elastic manner and graze or have a small distance from a front face28of the first singulating disk3. It is also possible that the grain scrapers24and/or grain aligners25themselves are formed from a spring-elastic material and always return to their starting position again upon being deformed. If a piece of bulk material21or a grain21is needed at a processing instrument, in particular the injection molding machine2, then the singulating disk3is further moved by the drive unit12such that a grain21received by a suction bore5is positioned across from the blow-off nozzle17so that this one grain21is pushed off or blown off of the suction opening5via the overpressure29generated by the compressor18. The grain21is thereby received in an exit nozzle30that is integrated in or arranged on the housing section7. From there, conveyance is possible to a processing or injection molding machine2via a feed device, in particular through a corresponding supply line31. The blow-off nozzle17in principle serves to facilitate the grain discharge, but is also used for cleaning microparticles out of the holes. Here, it is possible that a corresponding request for necessary volumes of material is sent, for example via a bus connection, by the processing machine2, in particular the controller thereof, whereupon the necessary number of grains21is determined by the controller24of the apparatus1and sent to the processing machine2. Of course, the number of grains21can also be transmitted, or everything can be preset and stored in the system, so that only a request signal is sent. In principle, it can be said that the throughput of individual grains21occurs through the defined dosing time, and that the resulting rotational speed or the grain discharge frequency is selected depending on the processing requirement. It is thereby possible that corresponding sensors, in particular an optical high-speed detection means or sensor32, can be used to determine the reliability against failure and control accordingly to prevent said failure. If no optical high-speed detection means is installed, a calibrating function must be provided. This calibrating function is necessary if the system is operated without a load cell, and a reliability against failure is determined using different rotational speeds.
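As a simple illustration of the request handling described above, the following sketch converts a requested material amount into a grain count and a discharge frequency. It is an assumption-based sketch, not the patent's control software: the function names, the example grain weight and the dosing time are hypothetical.

```python
# Hedged sketch of converting a requested dose (sent over the bus by the
# processing machine) into a grain count and discharge frequency. All names
# and example values are illustrative assumptions.

def grains_for_request(requested_mass_g: float, mean_grain_weight_g: float) -> int:
    """Round the requested mass to the nearest whole number of grains."""
    return max(1, round(requested_mass_g / mean_grain_weight_g))

def discharge_frequency_hz(grain_count: int, dosing_time_s: float) -> float:
    """Grains per second needed to deliver the dose within the dosing time."""
    return grain_count / dosing_time_s

if __name__ == "__main__":
    n = grains_for_request(requested_mass_g=2.5, mean_grain_weight_g=0.025)
    f = discharge_frequency_hz(n, dosing_time_s=10.0)
    print(f"{n} grains at {f:.1f} grains/s")
```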
In addition, a grain-to-weight ratio for further dosing is established via the calibration. For this purpose, in a semi-automatic operation, multiple dosing samples are dosed at different rotational speeds. Once the dosing weights of the samples of the bulk material or grains21are entered, a dosing curve is calculated and is automatically used for the physical adjustment of the dosing parameters, in particular the rotational speed. If an optical sensor for grain detection is installed, a relationship for the ratio of grain21to weight must also be communicated to the controller24. The reliability against failure is in this case optically evaluated, and can accordingly have an immediate effect on the dosing-regulation process and, if necessary, compensate for a shortage. Furthermore,FIG.1illustrates in dashed lines a housing33to which the housing sections7,8are attached. The housing33is used to accommodate the additional components, such as the stepper motor12, the compressor18, lines, etc. Of course, it is also possible that all components are integrated into the housing sections7,8so that no additional housing33is required. In the exemplary embodiment shown, the position of the blow-off nozzle17is arranged at the highest point of the singulating disk3, wherein this point can also be located between this and the maximum fill level of the granular material reservoir20. As a result, there is more space for scraping off adhering grains21or for aligning the grains21. It is possible, for example, that in the front housing section7with the granular material reservoir20, preferably a transparent viewing panel40is embodied, as illustrated inFIG.3. As can be seen and has already been described above, the singulating disk3protrudes into the granular material reservoir20in the lower region, in particular in the grain receiving region15in which the granulate reservoir20is arranged, so that the suction openings5arranged such that they are distributed on the circumscribed circle are surrounded by grains21, and so that the grains21are suctioned as a result of the negative pressure, which means that multiple suction openings5are arranged in the granular material reservoir20at the same time and have contact with the individual grains21, so that it is ensured that a grain21is reliably received at a suction opening5. It is thereby also possible that, on the singulating disk3for receiving the bulk material, in particular below the grain discharge region16, a covering element34, or cover, is arranged on the front side or rear side of the singulating disk3and is used to close off or cover one or more suction openings5. It is thus achieved that the inflow of air22into the open suction openings5, that is, into those openings in which no more grain21has been received, is minimized or prevented. Structurally, the suction covering element34can also be replaced or formed by the housing sections7,8or the sealing element9. Preferably, the singulating disk3for the grain21has a diameter of 300 mm. It is also possible that, if the system is appropriately sized, larger diameters, preferably between 200 mm and 1000 mm or more, of the singulating disks3can be used.
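The calibration described above can be sketched as a small curve-fitting step: sample doses are run at several rotational speeds, their weights are entered, a dosing curve is fitted, and the curve is inverted to choose the speed for a requested dose. The sketch below is illustrative only; it assumes numpy is available, and the sample numbers are made up rather than measured values from the disclosure.

```python
# Illustrative calibration sketch: fit a dosing curve (weight per fixed dosing
# time vs. rotational speed) and invert it to pick the speed for a target dose.
# Sample data below are assumptions, not values from the patent.

import numpy as np

speeds_rpm = np.array([5.0, 10.0, 20.0, 30.0])      # sample rotational speeds
dosed_weight_g = np.array([7.4, 14.5, 28.2, 41.0])  # weighed dose per dosing time

coeffs = np.polyfit(speeds_rpm, dosed_weight_g, deg=1)  # weight = a*speed + b
dosing_curve = np.poly1d(coeffs)

def speed_for_target(target_weight_g: float) -> float:
    """Invert the fitted dosing curve to get the required rotational speed."""
    a, b = coeffs
    return (target_weight_g - b) / a

print("fitted dosing curve:", dosing_curve)
print("speed for 20 g per dosing time:", round(speed_for_target(20.0), 1), "rpm")
```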
So that the grains21do not remain stuck to one another or so that a type of cavity in the granular material reservoir20is not formed by the removal, a stirring or carry-along element37(also simply referred to as a stirrer or carry-along), or a blade wheel that can be used for thoroughly mixing the bulk material21or the grains21, is arranged on the singulating disk3for receiving bulk material or on the shaft6. It is thus achieved that the grains21are thoroughly mixed and/or are moved, and therefore that no cavities free of granular material form in the granulate reservoir20. To enable grains21to be refilled into the granular material reservoir20arranged in the housing section7, the granular material reservoir20, in particular the grain receiving region15, is connected to a plastic granular material feed38(seeFIG.3), which means that when the fill level falls below a defined range or defined weight, an automatic refilling of bulk material21from a supply container39to a maximum upper range or weight occurs. It is thereby possible that one or more load cells40for measuring the weight of the bulk material21contained are arranged in the region of the granulate reservoir20, or that the entire apparatus1is arranged on one or more load cells40and/or a weighing device41with the one or more load cells40(FIG.3) is arranged for determining the weight of the bulk material21filled or of the grains21. This can constitute a further option, a weight-loss measurement, also called loss-in-weight measurement, for the granular material reservoir; that is, the entire dosing unit1or the supply container is weighed gravimetrically and used for the grain/weight ratio. As schematically illustrated in the exemplary embodiments, the grain scrapers25and grain aligners26are arranged such that the suctioned grain21flows against an object, in particular a surface or edge, and the position of the grain21is slightly shifted, wherein the shifting is set, however, such that the grain21is not pushed out of the circle circumscribed by the suction openings5. It is thus achieved that any additional grains21adhering to the suctioned grain21are scraped off. It is also possible, for example, that a suction opening5is formed by a group of smaller openings. This has the advantage that, with very small grains21or a special grain shape, the grains21are not partially sucked into the suction openings5, which can potentially lead to catching or jamming during the blowing-off. It is thereby also possible that the suction openings5are provided with corresponding grating (not illustrated) so that a sucking-in of the grain21can likewise be prevented. The singulating disk3bears directly against the working seal9, in particular the sealing element9, or seal, which in turn is attached, in particular glued, to the housing section8. In this manner, a leak-proof rotation of the singulating disk3and the housing section8, as shown inFIG.1, is ensured via the working seal9. As illustrated previously, the major advantage of this apparatus is that the most exact dosing is possible, whereby the quality is enormously improved and the costs for a possible overdosing to compensate for a more inaccurate dosing are reduced. In addition to the presently discussed apparatus1, a further enhancement of the dosing precision at the single-grain level is achieved through the control algorithm with the controller.
Through the use of a quick-action screw system13, it is achieved that a rapid adaptation of the singulating disk3to the grain21, in particular to the grain sizes that are to be transported, can be carried out, which means that different singulating disks3having corresponding suction bores5can be used for differently sized grains21, so that when the bulk material21is changed, the singulating disk3is also changed at the same time. As a matter of form, it is noted that the invention is not limited to the embodiments shown, but rather can also include additional embodiments. | 17,603 |
11858187 | DESCRIPTION OF THE EMBODIMENTS The disclosure will be more fully described with reference to the drawings of the embodiments. However, the disclosure may be implemented in various forms and should not be limited to the embodiments described herein. The same or similar reference signs denote the same or similar components and will not be repeatedly described in the following paragraphs. FIG.1Ais a schematic view of a mold apparatus according to an embodiment of the disclosure. X-Y-Z coordinate axes are provided herein to facilitate description of the components. Referring toFIG.1A, a mold apparatus100aof this embodiment includes a mold110, a bearing structure120a, and a sensing module130a1. InFIG.1A, a cavity112of the mold110is schematically illustrated in a dot-chain line, but its shape and arrangement are not limited thereto. The bearing structure120ais adapted to provide structural support for the sensing module130a1. The sensing module130a1is adapted to sense at least one of a temperature and a pressure in the cavity112. The bearing structure120aof this embodiment is an ejector plate structure140a, and the sensing module130a1is disposed in the bearing structure120a(the ejector plate structure140a) to improve space utilization of the mold apparatus100a. The ejector plate structure140aincludes a pair of ejector plates142aand a plurality of ejector pins144. The pair of ejector plates142aare disposed outside the mold110, and the ejector pins144extend from the pair of ejector plates142atoward the cavity112of the mold110. The ejector pins144are adapted to eject components (not shown) in the cavity112out of the cavity112. The mold apparatus100aof this embodiment is adapted for an injection molding process but is not limited thereto. As shown inFIG.1A, the mold apparatus100aincludes two sensing modules130a1disposed corresponding to two ejector pins144. A portion of the sensing module130a1is disposed in the pair of ejector plates142a, and another portion of the sensing module130a1is disposed in the ejector pin144and extends to the cavity112of the mold110. Herein, one sensing module130a1extends to a position B1of the cavity112to measure the temperature and the pressure of the position B1. Another sensing module130a1extends to another position B2of the cavity112to measure the temperature and the pressure of the position B2. The mold apparatus100ameasures the temperature and the pressure of the two positions B1and B2respectively through the two sensing modules130a1. Herein, the positions B1and B2are any positions in the cavity112. In addition, the number of the ejector pins144of the ejector plate structure140ais not limited thereto, and the number of the sensing modules130a1and the arrangement thereof are also not limited thereto. The user may arrange the sensing module130a1according to the requirements to sense the temperature and the pressure of multiple positions of the cavity112. This is conducive to production, monitoring of process stability, and reduction in the manufacturing cost of the mold apparatus100a, and at the same time, provides a good data source for future development of smart manufacturing and smart molding. FIG.1Bis a schematic view of some components of the mold apparatus ofFIG.1A.FIG.1Bis a partial cross-sectional view ofFIG.1Aillustrating the arrangement relationship of one sensing module130a1, the bearing structure120a, and the mold110. Referring toFIG.1B, the sensing module130a1includes a temperature sensor132and a pressure sensor136. 
The temperature sensor132of this embodiment is an ejector-pin-type temperature sensor. The temperature sensor132has an extension structure133and an abutting portion P2. The extension structure133extends from the abutting portion P2along a movement axis M1. The pressure sensor136and the abutting portion P2are disposed (located) in an accommodating space122in the bearing structure120a(in the pair of ejector plates142a), and the extension structure133is disposed at the ejector pin144and extends toward the mold110. A sensing portion P1of the temperature sensor132is disposed at the extension structure133and is located in the mold110, and the sensing portion P1corresponds to the position B1in the cavity112. The temperature sensor132senses the temperature at the position B1in the cavity112through the sensing portion P1. The pressure sensor136corresponds to the abutting portion P2of the temperature sensor132. Herein, the temperature sensor132is movably disposed in the mold110and the bearing structure120aalong the movement axis M1, and the sensing portion P1and the abutting portion P2are respectively located at two opposite ends of the temperature sensor132on the movement axis M1. When the sensing portion P1of the temperature sensor132is subjected to a pressure from the position B1of the cavity112, the temperature sensor132is adapted to be pushed to move toward the pressure sensor136along the movement axis M1, and the abutting portion P2of the temperature sensor132is moved to push the pressure sensor136. In other words, the pressure sensor136is squeezed by the movement of the temperature sensor132to measure the pressure exerted at the position B1. Specifically, the pressure sensor136and the temperature sensor132are disposed coaxially (on the movement axis M1), and a sensing protrusion137of the pressure sensor136is also located on the movement axis M1. That is, the pressure sensor136and the temperature sensor132are built-in coaxially. The pressure sensor136senses the pressure based on a received pressure of the sensing protrusion137. As shown inFIG.1B, the sensing protrusion137of this embodiment faces the abutting portion P2of the temperature sensor132and is adapted to be directly abutted by the abutting portion P2. Therefore, the sensing module130a1is adapted to simultaneously measure the temperature and the pressure of the position B1in the cavity112through the temperature sensor132and the pressure sensor136.
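For completeness, the following is a hedged sketch of how readings from such coaxial sensing modules could be collected for the process monitoring mentioned earlier. The patent does not specify any software interface, so the read functions, sampling scheme and values below are placeholders and assumptions only.

```python
# Illustrative data-acquisition sketch for coaxial temperature/pressure readings
# at cavity positions B1 and B2. The read_* functions are placeholders; a real
# system would talk to the actual sensor hardware.

import random
import time

def read_temperature_c(position: str) -> float:
    """Placeholder for the temperature signal at a cavity position."""
    return 230.0 + random.uniform(-2.0, 2.0)

def read_pressure_bar(position: str) -> float:
    """Placeholder for the pressure sensed via the abutting pressure sensor."""
    return 350.0 + random.uniform(-20.0, 20.0)

def sample_cycle(positions=("B1", "B2"), samples=5, period_s=0.2):
    """Collect timestamped (position, temperature, pressure) rows."""
    log = []
    for _ in range(samples):
        for pos in positions:
            log.append((time.time(), pos, read_temperature_c(pos), read_pressure_bar(pos)))
        time.sleep(period_s)
    return log

if __name__ == "__main__":
    for row in sample_cycle():
        print(row)
```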
When the temperature sensor132is moved along the movement axis M1under pressure, the abutting portion P2directly abuts against the abutting surface138, so that the sensing protrusion137directly abuts against the inner surface of the bearing structure120a. In other words, at this time, the sensing protrusion137actually abuts against the inner surface of the bearing structure120a. Accordingly, it is learned that the sensing protrusion137may be directly or indirectly abutted by the abutting portion P2so that the pressure sensor136can sense a pressure. Therefore, the sensing module130bof this embodiment achieves the same effects as the above embodiment. Referring toFIG.1BandFIG.2Bat the same time, a sensing module130cof this embodiment is similar to the above embodiment, and the difference between the two lies in that a mold apparatus of this embodiment further includes a protection structure150a, and the protection structure150acovers the abutting portion P2of the temperature sensor132to provide structural protection. Herein, the protection structure150ahas a substantially C-shape to cover the abutting portion P2. The protection structure150ais disposed in the bearing structure120a, and the protection structure150ais located between the temperature sensor132and the pressure sensor136. As shown inFIG.2B, the temperature sensor132, the protection structure150a, and the pressure sensor136are disposed coaxially (on the movement axis M1), and the protection structure150ais movably disposed in the bearing structure120aand is adapted to be pushed by the temperature sensor132. Specifically, when the temperature sensor132is moved under pressure, the protection structure150adirectly abuts against the pressure sensor136along with the movement of the temperature sensor132. Herein, the sensing protrusion137faces the protection structure150a, and the protection structure150adirectly abuts against the sensing protrusion137. Of course, the arrangement of the sensing protrusion137is not limited thereto. For example, as shown inFIG.2A, the sensing protrusion137may face away from the protection structure150a(i.e., facing the inner surface of the bearing structure120a), so that the sensing protrusion137directly abuts against the inner surface of the bearing structure120a. In addition, to prevent deformation of the protection structure150adue to squeezing by the sensing protrusion137, the hardness of the protection structure150ais greater than the hardness of the sensing protrusion137. For example, if the hardness of the sensing protrusion137is 38 HRC, the hardness of the protection structure150ais greater than 38 HRC. Of course, the hardness of the sensing protrusion137is not limited thereto. Accordingly, the mold apparatus of this embodiment achieves effects similar to the above embodiment.
For example, the sensing protrusion137may face away from the protection structure150b(i.e., facing the inner surface of the bearing structure120a) as shown inFIG.2A, so that the sensing protrusion137directly abuts against the inner surface of the bearing structure120a. Accordingly, the protection structure150bof this embodiment achieves the same effects as the protection structure150aof the above embodiment. Of course, the configurations of the protection structures150aand150bare not limited to the above embodiments, and the user may design the protection structures150aand150baccording to the structural design requirements. According to the above, the temperature sensor132and the pressure sensor136may be arranged in multiple possible ways, and the mold apparatus may include the protection structures150aand150b. The arrangement of the mold apparatus100aand the sensing modules130a1shown inFIG.1Amay be one or a combination of the arrangements of the sensing modules130a1,130b,130c, and130dshown inFIG.1BtoFIG.2C. Specifically, the sensing protrusion137and the abutting portion P2are located on the same movement axis M1, and the sensing protrusion137corresponds to a pressure-sensing surface. The pressure-sensing surface varies according to the arrangement of the sensing protrusion137. The pressure sensor136is adapted to bear the abutting force applied by the abutting portion P2, so that the sensing protrusion137abuts against the pressure-sensing surface. For example, in the embodiment shown inFIG.1B, a pressure-sensing surface S1is the surface of the abutting portion P2. In the embodiment shown inFIG.2A, a pressure-sensing surface S2is the inner surface of the bearing structure120a. In the embodiment shown inFIG.2B, a pressure-sensing surface S3is the surface of the protection structure150a. In the embodiment shown inFIG.2C, a pressure-sensing surface S4is the surface of the protrusion152of the protection structure150b. Accordingly, the sensing modules130a1,130b,130c, and130dmay coaxially measure the temperature and the pressure of any position B1in the cavity112. FIG.3is a schematic view of a mold apparatus according to another embodiment of the disclosure. Referring toFIG.1AandFIG.3at the same time, a mold apparatus100bof this embodiment is similar to the above embodiment, and the difference between the two lies in that a bearing structure120bof this embodiment is not an ejector plate structure140b. The pair of ejector plates142bhave a through-hole143, and a sensing module130a2is inserted into the ejector plate structure140bthrough the through-hole143. The bearing structure120bis sleeved on one end of the sensing module130a2to provide structural protection, and the other end of the sensing module130a2extends into the cavity112(shown in a dotted line) of the mold110to measure the temperature and the pressure at a position B3of the cavity112. The arrangement of the temperature sensor132and the pressure sensor136of the sensing module130a2and/or the protection structures150aand150bis similar to the arrangement of the sensing modules130a1,130b,130c, and130dshown inFIG.1BtoFIG.2Cand will not be repeatedly described herein. Of course, the arrangement of the sensing module130a2is not limited thereto. For example, in another embodiment (not shown), the sensing module130a2is disposed outside the ejector plate structure140b, and a projection of the sensing module130a2onto the mold110does not overlap with a projection of the ejector plate structure140bonto the mold110. 
In another embodiment (not shown), the mold apparatus100bincludes the sensing module130a1and the sensing module130a2at the same time. The sensing modules130a1and130a2and the bearing structures120aand120bmay be arranged in multiple possible ways, and the user may arrange them according to the requirements. FIG.4is a cross-sectional view of the mold apparatus ofFIG.1A.FIG.4is a cross-sectional view taken along line A ofFIG.1A. Referring toFIG.1AandFIG.4, to prevent damage to the temperature sensor132(shown inFIG.1B) due to high temperature and limitation on the applicability of the temperature sensor132in different processes, the mold apparatus100aincludes a cooling flow path160, and the sensing module130a1is surrounded by the cooling flow path160. The cooling flow path160may be regarded as a mold sensor cooling structure which is adapted to reduce the temperature of the sensing module130a1. More specifically, the cooling flow path160may surround a portion of the temperature sensor132other than the sensing portion P1(shown inFIG.1A) to locally cool the temperature sensor132. Accordingly, the temperature sensor132can resist higher mold temperature, which improves the applicability of the temperature sensor132in different processes. Herein, the temperature sensor132is an optical fiber temperature sensor and includes a light receiving unit LR (shown inFIG.1BtoFIG.2C). The light receiving unit LR is disposed at the abutting portion P2to receive a temperature signal from the sensing portion P1. Since the temperature sensor132is an optical fiber temperature sensor, the temperature signal of the sensing portion P1is not affected by local temperature reduction of the temperature sensor132, and the temperature sensed by the temperature sensor132is thus not distorted. As shown inFIG.4, the cooling flow path160is located in the bearing structure120a(the pair of ejector plates142aof the ejector plate structure140a) and surrounds the extension structure133of the temperature sensor132. Herein, the cooling flow path160has a flow channel161, and the flow channel161has a water inlet162and a water outlet164. After a cooling liquid with low heat flows into the flow channel161through the water inlet162and exchanges heat with the two extension structures133, a cooling liquid with high heat leaves from the water outlet164. The flow channel161has a substantially C-shape and simultaneously surrounds and cools the two extension structures133. Of course, the design of the flow channel161of the cooling flow path160and the arrangement position thereof are not limited thereto. For example, in another embodiment (not shown), the cooling flow path160includes two flow channels161to respectively surround and cool the two extension structures133. In another embodiment (not shown), the cooling flow path160is located in the bearing structures120aand120band has a helical flow channel covering the abutting portion P2and/or the extension structure133. In another embodiment (not shown), the cooling flow path160is located in the protection structures150aand150bshown inFIG.2BandFIG.2Cto cool the abutting portion P2of the temperature sensor132. The user may configure the cooling flow path160according to the structural design requirements to achieve the effect of reducing the temperature of the temperature sensor132, so that the mold apparatus100acan be used in higher temperature processes, and the applicability of the mold apparatus100ain different processes can be improved. 
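As a rough, purely illustrative complement to the description of the cooling flow path160, the sketch below estimates the heat carried away by the coolant from a simple steady-state energy balance, Q = m_dot x c_p x (T_out - T_in). None of the numerical values appear in the specification; they are assumptions chosen only to make the example runnable.

# Hedged estimate of the heat removed by a coolant stream such as the one in
# the cooling flow path 160; all numeric values below are assumptions.
def heat_removed_w(mass_flow_kg_s: float, cp_j_per_kg_k: float,
                   t_out_c: float, t_in_c: float) -> float:
    """Heat carried away by the coolant in watts: Q = m_dot * c_p * (T_out - T_in)."""
    return mass_flow_kg_s * cp_j_per_kg_k * (t_out_c - t_in_c)

if __name__ == "__main__":
    # Assumed: 0.02 kg/s of water (c_p about 4186 J/(kg*K)) warming from 25 C to 30 C.
    q = heat_removed_w(0.02, 4186.0, 30.0, 25.0)
    print(f"estimated heat removed: {q:.0f} W")  # roughly 419 W under these assumptions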
The cooling flow path160is not disposed in the mold110, so that the temperature of the mold110would not be affected by the cooling flow path160, and the mold110would not be hindered from reaching its working temperature. FIG.5is a schematic view of a mold apparatus according to another embodiment of the disclosure. Referring toFIG.5, a sensing module130eand a protection structure150cof this embodiment are disposed in the mold110. Specifically, the sensing module130eis covered by the protection structure150c, and the cooling flow path160is located in the protection structure150c. Herein, it is possible that the temperature sensor is not an ejector-pin-type temperature sensor. The sensing module130eand the cooling flow path160of this embodiment achieve effects similar to the above embodiment. In summary of the above, in the sensing module of the mold apparatus of the disclosure, since the temperature sensor and the pressure sensor are disposed coaxially (on the movement axis), the sensing module is adapted to simultaneously measure the temperature and the pressure of any position in the cavity, which reduces the installation and manufacturing costs of the sensor of the mold apparatus. Herein, the temperature sensor and the pressure sensor may be combined in multiple possible ways. Specifically, the temperature sensor is movably disposed in the mold and the bearing structure along the movement axis. The sensing portion of the temperature sensor senses the temperature of any position in the cavity and transmits the temperature signal to the abutting portion of the temperature sensor. When the temperature sensor is subjected to a pressure from this position, the abutting portion of the temperature sensor is moved along the movement axis and squeezes the pressure sensor to sense the pressure at this position. The sensing protrusion of the pressure sensor and the abutting portion are located on the same movement axis, and the sensing protrusion corresponds to a pressure-sensing surface. The pressure-sensing surface varies according to the arrangement of the sensing protrusion. For example, when the sensing protrusion faces the abutting portion, the pressure-sensing surface is the surface of the abutting portion. When the sensing protrusion faces the bearing structure, the pressure-sensing surface is the inner surface of the bearing structure. In addition, the mold apparatus may include a protection structure disposed between the temperature sensor and the pressure sensor, and the protection structure provides protection for the abutting portion. The protection structure abuts against the pressure sensor, and when the sensing protrusion faces the protection structure, the pressure-sensing surface is the surface of the protection structure. The hardness of the protection structure is greater than the hardness of the sensing protrusion. In addition, the mold apparatus of the disclosure further includes a cooling flow path to lower the temperature of the temperature sensor and prevent damage to the temperature sensor due to high temperature. The cooling flow path is located in the bearing structure and/or the protection structure, and the cooling flow path may exchange heat with a portion of the temperature sensor other than the sensing portion to locally reduce the temperature of the temperature sensor. Accordingly, the temperature sensor can resist higher mold temperature, so that the applicability of the temperature sensor in different processes can be improved.
It will be apparent to those skilled in the art that various modifications and variations can be made to the disclosed embodiments without departing from the scope or spirit of the disclosure. In view of the foregoing, it is intended that the disclosure covers modifications and variations provided that they fall within the scope of the following claims and their equivalents. | 21,315 |
11858188 | DETAILED DESCRIPTION Injection Stretch Blow Molding Machine: Next, the present invention will be described in detail on the basis of an embodiment illustrated inFIGS.1to7. In the drawing, reference numeral1denotes an injection stretch blow molding machine configured to produce a hollow body made of a synthetic resin. The injection stretch blow molding machine1includes an injection molding section2, a blow molding section3, and an ejection section4arranged in this order at an angle of 120 degrees so that they are equally spaced apart in a circle, as illustrated inFIG.1. The injection molding section2is configured to inject a molten resin into an injection molding mold to mold a preform that maintains its high temperature. In particular, the injection stretch blow molding machine1of the present embodiment is configured such that the preform can be released earlier in a state in which the preform is able to be stretched and blown in the blow molding section3. The molded preform is released while being held with a lip mold that is incorporated as a part of the injection molding mold. Then, it is transferred to the blow molding section3and placed in the blow molding mold. The blow molding section3is configured to stretch the preform held by the lip mold and blow the same with high pressure air or the like to mold a hollow body. The blow-molded hollow body is transferred to the ejection section4while being held with the lip mold. The ejection section4is configured to eject the hollow body formed in the blow molding section3from the molding machine. When the lip mold moves from the blow molding section3to the ejection section4, the lip mold as a split mold opens to release the constraint on the hollow body. As described above, the hollow body detached from the lip mold is ejected from the molding machine. Then, the lip mold having released the hollow body returns to the injection molding section2, so as to be incorporated into the injection molding mold for a preform as a part thereof. Therefore, the injection stretch blow molding machine1as described above is configured such that the preform molded in the injection molding section2is transferred to the blow molding section3with the lip mold, and stretched and blown into the hollow body in the blow molding section3. Then the hollow body is transferred to the ejection section4with the lip mold, where the lip mold releases the hollow body. Molding Cycle: In the injection stretch blow molding machine1, as illustrated inFIG.2, the following processes are continuously performed as a series of steps: injection molding processes110,120,130, . . . for injection molding a preform; blow molding processes210,220,230, . . . for stretching and blowing the preform into the hollow body as described above; and ejection processes310,320,330, . . . for ejecting the hollow body from the molding machine in the ejection section. In the present embodiment, the injection molding processes110,120,130, . . . , the blow molding processes210,220,230, . . . , and the ejection processes310,320,330, . . . , are combined into molding cycles for hollow body410,420,430, . . . , respectively. The lip mold moves for the above-described operations from the injection molding section2, to the blow molding section3, and then to the ejection section4. The lip mold finally returns to the injection molding section2in order to repeat molding cycles410,420,430, . . . for the hollow body.
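To make the stage-offset operation of the molding cycles easier to follow, the short Python sketch below steps the cycles 410, 420, and 430 through the injection molding, blow molding, and ejection positions one stage apart. It is only an editorial illustration of the sequence described above, not part of the specification.

# Illustrative schedule of molding cycles proceeding one stage apart through
# the three sections of the machine. Editorial reading aid only.
SECTIONS = ["injection molding", "blow molding", "ejection"]
CYCLES = ["cycle 410", "cycle 420", "cycle 430"]

def print_schedule(num_stages: int = 5) -> None:
    """Print which section each molding cycle occupies at each stage."""
    for stage in range(num_stages):
        row = []
        for offset, cycle in enumerate(CYCLES):
            step = stage - offset  # each cycle starts one stage after the previous one
            if step < 0:
                row.append(f"{cycle}: waiting")
            elif step < len(SECTIONS):
                row.append(f"{cycle}: {SECTIONS[step]}")
            else:
                row.append(f"{cycle}: done")
        print(f"stage {stage}: " + " | ".join(row))

if __name__ == "__main__":
    print_schedule()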
In the injection stretch blow molding machine1, three lip molds are used and arranged at three positions so that they align with the injection molding section2, the blow molding section3, and the ejection section4at the same time. The three lip molds are assembled to a rotary plate, which rotates 120 degrees in one direction and stays there. The rotary plate descends, and lifts after the injection and cooling are completed. Then, the rotary plate rotates a further 120 degrees in the one direction and repeats. In this manner, the lip molds rotate through each position sequentially. Accordingly, in the injection stretch blow molding machine1, the three lip molds move among three positions, thereby carrying out multiple molding cycles for a hollow body simultaneously while one stage apart. FIG.2schematically shows a state in the injection stretch blow molding machine1in which a molding cycle410as a single molding cycle for a hollow body and a next molding cycle420are simultaneously in process though one stage apart. As described above, the molding cycle410as the single molding cycle includes the injection molding process110, the blow molding process210, and the ejection process310in succession. The next molding cycle420, which occurs one stage behind the molding cycle410, includes the injection molding process120, the blow molding process220, and the ejection process320in succession. Moreover, the molding cycle430, which occurs one stage behind the molding cycle420, includes the injection molding process130, the blow molding process230, and the ejection process330in succession. As illustrated inFIG.2, the injection stretch blow molding machine1is configured to mold hollow bodies by the respective molding cycles410,420,430, . . . while each occurs one stage behind the previous molding cycle. As illustrated inFIG.1, the injection molding section2includes an injection molding mold5and an injection apparatus6. The injection apparatus6is configured to inject a molten resin into the injection molding mold5in each of the injection molding processes110,120,130, . . . . Note that although the injection molding mold5is composed of a lip mold, an injection core mold, and an injection cavity mold (also including a hot runner device or the like),FIG.1represents the injection molding mold5showing a position where the injection cavity mold is located. Also note that neither the lip molds that are moved and stopped to be set on the injection cavity mold nor the injection core mold that enters the inside of the injection cavity mold are shown to facilitate the description of the arrangement of the injection molding section2. Injection Apparatus: FIG.3schematically shows the injection apparatus6. The injection apparatus6is an in-line screw type apparatus which has a barrel (heating cylinder)7having a cylinder8, and a rotatable screw9in the cylinder8. The screw9can freely rotate and move forward and rearward. The injection apparatus6supplies the chips of a resin material from a feed hopper10to the supply section of the screw9, and causes the screw9to move the supplied resin material from the compression section to the metering section. Shear heat is generated by this movement. In addition to the shear heat, heating by a heater11as well as mixing by screw rotation can plasticize and knead the resin material to generate a molten resin. The generated molten resin is fed forward, ahead of the screw9. Thus, the molten resin positioned in front of the screw9is injected into the injection molding mold5.
The heater11for facilitating plasticization of the resin material is disposed on the outer periphery of the barrel7. During each of the injection molding processes110,120,130, . . . in the molding cycles410,420,430, . . . , the injection apparatus6performs feeding the molten resin into the injection molding mold (filling510), suppressing backflow while maintaining an application of pressure to the molten resin having been fed into the injection molding mold (holding pressure520), and feeding a preset amount of the molten resin for injection to the front of the screw9(metering530). Filling510, holding pressure520, and metering530are collectively referred to as an injection cycle, which is continuously repeated in a series. Thus, a single injection cycle involves injecting (filling+holding pressure) and metering (seeFIG.4). The injection cycle is repeated in accordance with the advancement of processes performed by the molding machine main body side, that is, in accordance with the advancement of the molding cycle as will be described later. Injection in Injection Molding Process: The screw9is in the injection start position when the filling510is performed in each of the injection molding processes110,120,130, . . . in the respective hollow body molding cycles410,420,430, . . . . Then, in the present embodiment, the screw9in the injection start position at the time of starting injection by the injection apparatus6moves forward while rotating. The forward movement of the rotating screw9can achieve the injection of a preset amount of the molten resin into the injection molding mold. The filling510of the injection apparatus6is accomplished by applying a hydraulic pressure to move the screw9forward. Further, the end of the filling510of the preset amount of the molten resin is based on the screw position being measured. That is, when it is determined that the screw9reaches the switch-over position, the hydraulic pressure is switched to the back pressure for holding pressure. It should be noted that the screw9of the injection apparatus6of the present invention is not forcibly stopped by a stopper or any similar mechanism when it reaches the abutment position. In the injection apparatus6, as described above, the screw9rotates, for example, from the time of starting injection within the injection molding process110of the molding cycle410. The rotation of the screw9starts the generation of the molten resin for the filling510in the injection molding process120in the next molding cycle420. That is, the start of injection and the start of generation of the molten resin for the next shot are adjusted at the same time. During the filling510, a hydraulic pressure is applied to screw9in order to advance the screw9. The increased back pressure from the resin in front of the screw (injection pressure) closes a check ring (ring-shaped check valve located at the tip of the screw). Therefore, even when the generation of the molten resin is started by the rotation of the screw9, the molten resin does not flow from the metering section to the front of the screw. Holding Pressure in Injection Molding Process: After completing the filling510of the molten resin in each of the injection molding processes110,120,130, . . . , the injection apparatus6performs the holding pressure520while rotating the screw9continuously from the operation of the filling510. In addition, in the injection molding mold of each of the injection molding processes110,120,130, . . . 
, a transition is made from injection filling to cooling, so that the preform made of the molten resin is cooled. In the injection apparatus6, the back pressure set for holding pressure at the time of holding pressure520is applied to the screw9. Then, the screw9continues to rotate from the filling510, and continues to generate the molten resin for the next shot. Metering in Injection Molding Process: After completing the holding pressure520of the molten resin in each of the injection molding processes110,120,130, . . . , the injection apparatus6moves the screw9backward while rotating the screw9continuously from the screw rotation operation at the holding pressure520, thereby performing the metering530. The operation of the injection apparatus6for the metering530falls within the dry cycle time involving the mold opening, rotation, and mold closing at the injection molding section2. During the metering530of the injection apparatus6, the screw9moves backward while rotating when back pressure is applied to it. Thus, during the metering530, the screw9plasticizes and kneads the resin material as described above to feed a preset amount of molten resin forward, ahead of the screw9. When the screw9moves backward and reaches the injection start position while feeding a preset amount of molten resin forward, ahead of the screw, the backward movement is stopped. In the present embodiment, if the screw9moves backward to the injection start position, the rotation of the screw9is stopped. However, it is also possible for the screw9to keep rotating while in that position. In the injection molding section2of the injection stretch blow molding machine1, when the injection molding process110of the molding cycle410for a hollow body is completed, the injection molding process120of the next molding cycle420is performed. Then, the injection apparatus6in the injection molding process120performs again the operation of the filling510, the holding pressure520, and the metering530. As described above, at the time of the filling510for the injection molding process, the plasticizing and kneading operations for the generation of the molten resin for the next shot (the amount to be injected in the next injection molding process130) are started at the same time. In the injection apparatus6of the present embodiment, since the plasticizing and kneading for generating the molten resin for the next shot proceeds from the time of starting injection, the generation of the molten resin for the next shot can be started earlier than in the conventional injection apparatus in which the screw is started to rotate after the pressure holding is completed. Thus, the injection apparatus6of the present embodiment can finish the generation of the molten resin for the next shot earlier than the conventional injection apparatus. Furthermore, the injection stretch blow molding machine1can shorten the time required for each of the injection molding processes110,120,130, . . . in the respective molding cycles410,420,430, . . . for the hollow bodies that are performed one stage behind the former process. Thus, the time required for the molding cycles410,420,430, . . . for the hollow bodies is shortened, so that the production efficiency thereof is increased. In the injection apparatus6of the present embodiment, the screw9continuously rotates in the filling510, the holding pressure520, and the metering530.
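As a hedged illustration of why rotating the screw from the start of injection enlarges the time available for preparing the next shot, the sketch below compares the upper bound on the plasticizing window in the two approaches. The durations are placeholders (they happen to echo the practical-example figures reported later in this description), and the calculation is an editorial simplification rather than part of the specification.

# Hedged comparison of the upper bound on the time available for generating the
# next shot. All durations are placeholder values in seconds.
FILL = 1.75       # filling
HOLD = 3.75       # holding pressure
COOL = 5.00       # cooling in the mold
DRY_CYCLE = 4.40  # mold opening, rotation, and mold closing

def plasticizing_window_s(rotate_from_injection_start: bool) -> float:
    """Upper bound on the time during which the screw may rotate to prepare the next shot."""
    if rotate_from_injection_start:
        # The screw rotates through filling and holding pressure and may keep
        # rotating during the later metering phase.
        return FILL + HOLD + COOL + DRY_CYCLE
    # Conventional approach: rotation (metering) begins only after holding pressure ends.
    return COOL + DRY_CYCLE

if __name__ == "__main__":
    print("window, rotation from injection start:", plasticizing_window_s(True), "s")
    print("window, conventional approach:", plasticizing_window_s(False), "s")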
However, the number of screw revolutions per unit time is not necessarily equal among the filling510, the holding pressure520, and the metering530. The number of screw revolutions per unit time is variable in each of the filling510, the holding pressure520, and the metering530, and can be independently set. It should be noted that the number of screw revolutions per unit time for the filling510, the holding pressure520, and the metering530may differ. Also, the number of screw revolutions per unit time may be changed during each of the filling510, the holding pressure520, and the metering530. Practical Examples Test Method: An exemplary injection stretch blow molding machine for molding bottles that implements the present invention and another for a comparative example were prepared for testing. The test will now be described. Bottles produced by the injection stretch blow molding machines according to the practical example and the comparative example were made of polyethylene terephthalate (PET). The weight of the bottle was set to 96.5 g. An injection molding mold for producing four bottles at a time was used for both the practical example and the comparative example. The bottles produced by the following two molding methods of the practical example and the comparative example were evaluated on the basis of qualities and molding data. First, molding conditions for molding a good bottle were searched for with the injection stretch blow molding machine of the practical example. The injection stretch blow molding machine of the practical example was controlled such that the operation of plasticizing and kneading the resin material by rotating the screw to generate the molten resin was started at the same time as the start of the injection time set in the injection molding process in the injection molding section (at the same time when injection of the injection apparatus is started). In addition, the time for the operation of generating the molten resin (rotation of the screw) was set to correspond to the time taken for one cycle (injection cycle) of the injection apparatus within the time corresponding to the injection molding process of the molding cycle of the bottle (i.e. molding cycle of the hollow body). Specifically, the number of screw revolutions was set at 38 rpm. Comparative Example In this comparative example, the injection stretch blow molding machine starts rotation of the screw only after applying holding pressure as in the conventional molding method. That is, the injection stretch blow molding machine starts metering the molten resin by rotating the screw only after applying holding pressure. In this comparative example, the number of screw revolutions for metering after pressure holding was the same as that of the practical example (38 rpm). Test Results of Practical Example: The time of the injection molding process in the molding cycle of the injection stretch blow molding machine of the practical example was 14.9 seconds, which includes 5.50 seconds for injection, 5.00 seconds for cooling, and 4.40 seconds for the dry cycle. The number of screw revolutions for generating the molten resin was 38 rpm as described above. The PET bottle was a conforming article and transparent. The injection apparatus in the practical example started rotation of the screw at the same time as the start of injection, and the injection filling time in the injection molding section was the same 1.75 seconds as for the comparative example (conventional molding method).
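The practical-example cycle breakdown quoted above can be checked with a few lines; the sketch below merely re-adds the figures already stated in the text (5.50 s injection, 5.00 s cooling, 4.40 s dry cycle, totaling 14.9 s) and introduces no new data.

# Verify the practical-example injection molding process time quoted in the text.
injection_s, cooling_s, dry_cycle_s = 5.50, 5.00, 4.40
total_s = injection_s + cooling_s + dry_cycle_s
print(f"injection molding process time: {total_s:.1f} s")  # 14.9 s, as stated
assert abs(total_s - 14.9) < 1e-9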
As described above, the injection filling time of the practical example in the injection molding section and the injection filling time of the comparative example both last 1.75 seconds. Therefore, in the practical example, it is considered that the molten resin is not fed forward, ahead of the screw, while it is being injected. In the practical example, the graph ofFIG.5shows that once the screw is at the abutment position (encircled number 1) and the molten resin has been completely injected, the screw moves backward. Since the screw is still rotating to generate the molten resin at this point, it is considered that the molten resin is now fed forward, ahead of the screw. In this way, metering can start immediately after the previous injection. The metering time of the injection apparatus in the practical example was 12.07 seconds. This is calculated by adding the difference, 3.75 seconds, between the injection time and the injection filling time (5.50-1.75) to the metering time, 8.32 seconds, that occurs between the cooling time and the dry cycle. See the graph ofFIG.5. The metering stroke of the screw rotating during the injection time (i.e., filling and pressure holding) was 20.5 mm (145.7-125.2), which is 35% of the injection stroke 57.9 mm (183.1-125.2). See the graph ofFIG.5. The standard deviation (not shown) of the resin pressure fluctuation in the barrel nozzle was calculated using nine consecutive shots for the bottle for quality inspection, but it was not significantly large, so it may be considered practically consistent. Test Results of Comparative Example: This comparative example is a molding method in which the metering is performed by rotating the screw only after applying holding pressure in the injection apparatus. The time of the injection molding process in the molding cycle of the injection stretch blow molding machine of the comparative example was 18.1 seconds. When the number of screw revolutions was 38 rpm, which is the same as in the above-described practical example, the metering could not be performed within the time for the injection molding process in the practical example. Therefore, the time of the injection molding process in the comparative example was set to 18.1 seconds. The qualities of the bottles produced by the comparative molding method were similar to those produced by the molding method according to the practical example. Graph ofFIG.5: In the graph ofFIG.5, a solid line indicates the practical example whereas a dashed-dotted line indicates the comparative example. As shown inFIG.5, the screw abutment position of the practical example is 125.2 mm, the position at the injection end time where the screw sits during pressure holding is 135.0 mm, and the screw-back position to which the screw returns after the holding pressure is applied is 145.7 mm. That is, the metering stroke of the practical example during the injection time (filling and holding pressure time and time for screw-back) is 20.5 mm (This is calculated by finding the difference between “screw-back position: 145.7 mm” and “screw abutment position: 125.2 mm”). Note that the screw abutment position is the same in both the practical example and the present comparative example. If the molten resin were fed forward and stored ahead of the screw during the filling of the practical example, the screw would be rearward of this screw abutment position.
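The metering time, metering stroke, and cycle-time figures quoted above can likewise be re-derived in a few lines. The sketch below only repeats the arithmetic already given in the text (12.07 s, 20.5 mm, roughly 35 percent, and the 18.1 s comparative cycle); it introduces no new data.

# Re-derive the practical-example figures quoted in the text.
injection_s, filling_s = 5.50, 1.75
metering_after_holding_s = 8.32  # metering between the cooling time and the dry cycle
metering_time_s = (injection_s - filling_s) + metering_after_holding_s
print(f"metering time: {metering_time_s:.2f} s")  # 12.07 s

# Screw positions quoted from the graph of FIG.5 (mm).
abutment_mm, screw_back_mm, injection_start_mm = 125.2, 145.7, 183.1
metering_stroke_mm = screw_back_mm - abutment_mm        # 20.5 mm
injection_stroke_mm = injection_start_mm - abutment_mm  # 57.9 mm
print(f"metering stroke: {metering_stroke_mm:.1f} mm "
      f"({metering_stroke_mm / injection_stroke_mm:.0%} of the injection stroke)")

practical_cycle_s, comparative_cycle_s = 14.9, 18.1
print(f"cycle-time difference: {comparative_cycle_s - practical_cycle_s:.1f} s")  # 3.2 s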
Since the screw abutment position is the same in both the practical example and the comparative example, it is inferred that the resin is not fed forward, ahead of the screw, during filling. The tables ofFIGS.6and7show the molding data (i.e., variations in bottle weights and barrel temperatures) obtained for the nine consecutive shots.FIG.6shows the results of the practical example, andFIG.7shows those of the comparative example. In barrel temperature setting, F section indicates the temperature set value at the barrel front, M section indicates the temperature set value at the barrel center, and R section indicates the temperature set value at the barrel rear. As is clear from the comparison between the practical example and the comparative example, the present invention can complete one injection cycle of the injection apparatus earlier. Therefore, the time required for the molding cycle of the hollow body by the injection stretch blow molding machine is also shortened, thereby enhancing the production efficiency of the hollow body. The above-described embodiments and the practical example illustrate aspects of the present invention, and the present invention is not limited to the above-described embodiments and practical examples. REFERENCE SIGNS LIST 1injection stretch blow molding machine2injection molding section5injection molding mold6injection apparatus7barrel8cylinder9screw10feed hopper11heater410,420,430molding cycle for hollow body110,120,130injection molding process210,220,230blow molding process310,320,330ejection process510injection520holding pressure530metering | 22,517
11858189 | DETAILED DESCRIPTION OF THE INVENTION FIG.1shows a schematic illustration of the transfer system1having the transportation device2for transporting the profile along the transportation path3, the latter here being indicated by an arrow inFIG.1. Feed tables4are disposed laterally of the transportation path3or the transportation device2, respectively, the feed tables4in turn being able to be displaced in relation to the transportation path3or to the transportation device2, respectively, thus perpendicularly to the transportation path3, by way of the guides5. The transportation device2in turn comprises clamping devices6for clamping the profiles during transportation. In order for both profile end portions to be processed, the feed tables are also disposed on both sides of the transportation path3. Moreover, different feed tables4are provided in succession in terms of the transportation path3. For the purpose of visualization, the feed tables4inFIG.1are not equipped with a tool or an injection device, respectively. The injection device7in turn is separately illustrated inFIG.2. This injection device7first comprises the actual injection-molding tool8which possesses a fixed injection apparatus9which is configured so as to be short in order to be accommodated on the feed table without being moved. The mold10into which the plastics material compound is in turn injected is in turn attached below the injection apparatus9. An assembly wall separates the side on which the transportation path3is situated from the side on which the injection device7is in turn situated. The injection device7in turn comprises a closing unit12which comprises a total of two closing installations13which in each case have two holding devices14, wherein the holding devices14are configured as pneumatic cylinders. The closing installations13furthermore comprise guides, for example, by way of which the pneumatic cylinders14are held and by way of which a contact pressure can be exerted on the contour parts15of the mold10. In this way, the contour parts can also be held together during the injection molding. The drive device16is partially mounted in the assembly plate11which firstly has the control ring17. The control rings17are in each case provided with a clearance18through which the profile P can be introduced into the region of the mold10. The displacement direction19of the feed table4is indicated by the arrow inFIG.2; the feed table4is displaced in this displacement direction19until the feed table4receives the profile P in the clearance18. The contour parts of the mold10are correspondingly moved by way of the control rings17. The opening of the mold is illustrated once again in detail inFIGS.3and4. The mold here comprises the contour parts15a,15bwhich are illustrated in the opened state inFIG.3. The contour parts15a,15bbear on the wedge-shaped slides20a,20b. The contour parts15a,15bare mounted such that, in a movement of the slides20a,20b, the contour parts15a,15bby way of the wedge are displaced perpendicularly to the movement of the slides20a,20b. FIG.4shows the difference between a closed and an opened mold10. The control rings17as part of the drive device16are once again illustrated inFIG.5, and the profile P is pushed through the clearance18and thus makes its way into the mold10, the contour parts15a,15bof the latter still being opened at this point in time. 
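The wedge coupling between the slides 20a, 20b and the contour parts 15a, 15b described above in connection with FIGS.3 and 4 can be illustrated with a one-line kinematic relation: for an idealized wedge of angle theta, a slide travel s produces a perpendicular contour-part displacement of approximately s*tan(theta). The specification gives no wedge angle, so the values in the following Python sketch are assumptions made purely for illustration.

# Hedged kinematic sketch of the wedge-shaped slides 20a/20b driving the contour
# parts 15a/15b perpendicularly to the slide motion. Values are assumptions.
import math

def contour_part_displacement_mm(slide_travel_mm: float, wedge_angle_deg: float) -> float:
    """Perpendicular displacement of a contour part for a given slide travel (idealized wedge)."""
    return slide_travel_mm * math.tan(math.radians(wedge_angle_deg))

if __name__ == "__main__":
    # Assumed: 10 mm of slide travel on a 15-degree wedge.
    d = contour_part_displacement_mm(10.0, 15.0)
    print(f"contour part displacement: {d:.2f} mm")  # about 2.68 mm under these assumptions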
The laterally pivoted closing installations13are once again separately illustrated inFIG.6, on account of which access to the location which is provided for attaching the closing unit12with the contour parts15a,15bis achieved. Pneumatic cylinders14which are distributed among two closing installations13are provided for contact pressure. A higher contact pressure can be achieved by the number and distribution of the pneumatic cylinders; it is also possible for the pressure to be more uniformly distributed such that the overall injection pressure can also be increased because the molds10are held together in a more stable manner. The plastics material compound is guided to the molds by way of the clearance21. As is illustrated inFIG.6, the mold10is attached in the region illustrated with dashed lines. It is a common feature of all embodiments and refinements of the invention that the injection device comprises a mold for receiving and mounting the profile during the injection molding and for configuring the cavity required during the injection, wherein the mold has at least two contour parts for enclosing the profile in the part to be injection-molded during the injection and for configuring the cavity. The injection device herein has a closing unit for opening and closing the contour parts. The closing unit when moving the first feed table relative to the profile is configured for keeping the contour parts open until the profile is enclosed by the contour parts, wherein the transportation device is configured for moving, and preferably not rotating, the profile above all in the region of the processing position exclusively parallel to the transportation path. The closing unit and the injection device are assembled so as to be displaceable conjointly with the first feed table. The first feed table in the direction of the transportation path or of the profile, respectively, is displaceable until the contour parts in the closed state can enclose the profile in the part to be injection-molded. This measure not only enables a light weight construction mode but also enables a particular degree of flexibility when retooling the machine and numerous cost advantages as a result of the lower requirement in terms of material and space as well as the time advantage during handling. One operator can optionally also be dispensed with. A transfer system according to the present invention having a feed table of this type with an injection device is particularly suitable for being able to carry out injection procedures at pressures that are not excessively high, for example, injection procedures for end caps. LIST OF REFERENCE SIGNS 1Transfer system2Transportation device3Transportation path4Feed table5Guide6Clamping device7Injection device8Injection-molding tool9Hot runner/Injection apparatus10Mold12Closing unit13Closing installation14Holding device/Pneumatic cylinder15,15a,15b: Contour parts16Drive device17Control ring18Clearance19Displacement path/direction of the feed table20a,20bSlide21ClearanceP Profile | 6,440 |